UMSL Bulletin 2023-2024
The 2023-2024 Bulletin and Course Catalog for the University of Missouri St. Louis.
Modern computing: Vision and challenges
Over the past six decades, the field of computing systems has undergone significant transformations, profoundly impacting society through developments such as the Internet and the commodification of computing. Underpinned by technological advancements, computer systems, far from being static, have continuously evolved and adapted to fill multifaceted societal niches. This has led to new paradigms such as cloud, fog, and edge computing and the Internet of Things (IoT), which offer fresh economic and creative opportunities. Nevertheless, this rapid change poses complex research challenges, especially in maximizing potential and enhancing functionality. To maintain an economical level of performance that meets ever-tighter requirements, one must understand what drives the emergence and expansion of new models, and how contemporary challenges differ from past ones. To that end, this article investigates and assesses the factors influencing the evolution of computing systems, covering established systems and architectures as well as newer developments such as serverless computing, quantum computing, and on-device AI at the edge. Trends emerge when one traces the technological trajectory: the rapid obsolescence of frameworks due to business and technical constraints, a move towards specialized systems and models, and varying approaches to centralized and decentralized control. This comprehensive review of modern computing systems looks ahead to the future of research in the field, highlighting key challenges and emerging trends and underscoring their importance in cost-effectively driving technological progress.
UMSL Bulletin 2022-2023
The 2022-2023 Bulletin and Course Catalog for the University of Missouri St. Louis.
Anomaly-based network intrusion detection
Master's dissertation in Industrial Electronics and Computers.
In the last few years, hardware and software security have become a major concern. As the systems' complexity
increases, so does their vulnerability to sophisticated attack techniques. Quite often, the problem lies in the heterogeneity of the devices connected to a vehicle, which makes it difficult to converge the monitoring of all existing protocols into a single security product. The market therefore requires more refined tools to monitor life-critical environments such as personal vehicles.
Considering that there are several ways to interact with a car's infotainment system, such as Wi-Fi, Bluetooth, or the CD player, auditing these interfaces has become a priority, as they represent a serious channel for reaching the internal car network. Today, security in car networks focuses on monitoring the CAN bus, leaving the aforementioned technologies behind and neglecting non-critical systems. Bluetooth, for example, poses different challenges from CAN: it interacts directly with the user and is exposed to external attacks.
An alternative approach to making modern vehicles and their on-board computers more robust is to keep track of the communications established with them. By applying anomaly-based intrusion detection, this dissertation analyzes the Bluetooth protocol to identify abnormal user interactions that may signal a non-conforming usage pattern. Ultimately, this embedded software product incorporates a substantial self-learning capability, which is vital for facing newly developed threats and raising overall security levels. Throughout this document, we present the case study followed by an alternative methodology that implements an LSTM-based algorithm to predict the sequence of HCI commands corresponding to normal Bluetooth traffic. The results show how this approach can improve intrusion detection in such environments, demonstrating a high capability for identifying abnormal patterns in the considered data.
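The detection idea can be sketched with a toy next-command model: learn which HCI command transitions are common in normal traffic, then score a new sequence by how many of its transitions the model finds unlikely. This is a simple bigram frequency table standing in for the dissertation's LSTM, and all command names below are hypothetical:

```python
from collections import Counter, defaultdict

class NextCommandModel:
    """Toy stand-in for the dissertation's LSTM: predicts the next HCI
    command from the previous one using bigram frequencies."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def fit(self, sequences):
        for seq in sequences:
            for prev, nxt in zip(seq, seq[1:]):
                self.counts[prev][nxt] += 1

    def prob(self, prev, nxt):
        total = sum(self.counts[prev].values())
        return self.counts[prev][nxt] / total if total else 0.0

def anomaly_score(model, seq, threshold=0.1):
    """Fraction of transitions the model considers unlikely."""
    if len(seq) < 2:
        return 0.0
    unlikely = sum(1 for p, n in zip(seq, seq[1:]) if model.prob(p, n) < threshold)
    return unlikely / (len(seq) - 1)

# Hypothetical normal traffic: connect, accept, data transfer, disconnect.
normal = [["CONN", "ACCEPT", "DATA", "DATA", "DISC"]] * 50
model = NextCommandModel()
model.fit(normal)
print(anomaly_score(model, ["CONN", "ACCEPT", "DATA", "DATA", "DISC"]))  # low
print(anomaly_score(model, ["CONN", "RESET", "SCAN", "RESET"]))          # high
```

An LSTM replaces the bigram table with a learned distribution over the next command given the whole history, but the anomaly-scoring step is the same: flag sequences whose observed transitions receive low predicted probability.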
Serverless Cloud Computing: A Comparative Analysis of Performance, Cost, and Developer Experiences in Container-Level Services
Serverless cloud computing is a subset of cloud computing widely adopted for building modern web applications, in which server and infrastructure management duties are delegated from customers to the cloud vendor. In serverless computing, customers pay for the runtime consumed by their services but not for idle time. Before serverless containers, customers needed to provision, scale, and manage servers themselves, which was a bottleneck for rapidly growing customer-facing applications where latency and scaling were concerns.
This thesis studies the viability of adopting a serverless platform for a web application in terms of performance, cost, and developer experience. Three serverless container-level services from AWS and GCP are employed: GCP Cloud Run, GKE Autopilot, and AWS EKS with AWS Fargate. Platform as a Service (PaaS) underpins the first, and Container as a Service (CaaS) the other two. A single-page web application was created to run incremental and spike load tests on these services and assess their performance differences. The cost differences are then compared and analyzed. Finally, developer experience is evaluated through the complexity of using each service during the project implementation.
Based on the results of this research, PaaS-based solutions were found to be a high-performing, affordable alternative to CaaS-based solutions in circumstances where high levels of traffic are anticipated periodically and occasional latency spikes are not a concern. Given the study's limitations, the author recommends additional research to strengthen these findings.
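The pay-for-runtime pricing difference that motivates such comparisons can be sketched with a toy cost model. All prices and workload numbers below are made up for illustration and are not actual AWS or GCP rates:

```python
def serverless_cost(requests_per_month, avg_duration_s, price_per_vcpu_s):
    """Pay only for the compute time actually consumed by requests."""
    return requests_per_month * avg_duration_s * price_per_vcpu_s

def provisioned_cost(vcpus, hours_per_month, price_per_vcpu_hour):
    """Pay for the instance whether or not it is serving traffic."""
    return vcpus * hours_per_month * price_per_vcpu_hour

# Hypothetical sporadic workload: 100k requests/month, 200 ms each.
sporadic = serverless_cost(100_000, 0.2, 0.000024)
always_on = provisioned_cost(2, 730, 0.04)
print(f"serverless: {sporadic:.2f} USD, provisioned: {always_on:.2f} USD")
```

The crossover point depends on sustained utilization: once traffic keeps provisioned capacity busy most of the time, the per-second serverless premium can dominate, which is why the thesis weighs performance and cost together rather than in isolation.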
Adaptive Intelligent Systems for Extreme Environments
As embedded processors become more powerful, a growing number of embedded systems equipped with artificial intelligence (AI) algorithms are being used in radiation environments to perform routine tasks and reduce radiation risk for human workers. Because of their low price, commercial off-the-shelf devices and components are increasingly popular for making such tasks more affordable. At the same time, this presents new challenges: improving radiation tolerance, supporting multiple AI tasks, and delivering power efficiency in embedded systems operating in harsh environments. Three strands of research work have been completed in this thesis: 1) a fast simulation method for analyzing single event effects (SEE) in integrated circuits, 2) a self-refresh scheme to detect and correct bit-flips in random access memory (RAM), and 3) a hardware AI system with dynamic hardware accelerators and AI models for increased flexibility and efficiency.
Variance in physical parameters in practical implementations, such as the nature of the particle, the linear energy transfer, and circuit characteristics, can have a large impact on final simulation accuracy, significantly increasing the complexity and cost of transistor-level simulation workflows and making SEE simulation difficult for large-scale circuits. Therefore, in the first research work, a new SEE simulation scheme is proposed that offers a fast and cost-efficient way to evaluate and compare the performance of large-scale circuits subject to the effects of radiation particles. The advantages of transistor-level and hardware description language (HDL) simulations are combined to produce accurate SEE digital error models for rapid error analysis in large-scale circuits. Under the proposed scheme, time-consuming back-end steps are skipped, and SEE analysis for large-scale circuits can be completed in just a few hours.
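The essence of a digital error model is to inject a bit-flip at a circuit node and check whether it propagates to an output, without simulating transistor physics. A minimal sketch of that idea, using a triple-modular-redundancy voter as a hypothetical circuit (not the thesis's actual models):

```python
def majority_voter(a, b, c):
    """Gate-level model of a TMR majority voter."""
    return (a & b) | (b & c) | (a & c)

def inject_seu(bits, pos):
    """Flip one bit to emulate a single-event upset at a circuit node."""
    flipped = list(bits)
    flipped[pos] ^= 1
    return flipped

# Flip each input of the voter in turn and count how many upsets
# are masked, i.e. do not change the circuit output.
inputs = [1, 1, 1]
golden = majority_voter(*inputs)
masked = sum(
    majority_voter(*inject_seu(inputs, i)) == golden
    for i in range(3)
)
print(f"{masked}/3 single upsets masked by the voter")
```

Running such fault-injection campaigns over an HDL netlist, with upset probabilities calibrated once at the transistor level, is what lets the proposed scheme skip the expensive back-end steps for every new circuit.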
In high-radiation environments, bit-flips in RAM not only occur but can also accumulate, and typical error mitigation methods cannot handle high error rates at low hardware cost. In the second work, an adaptive scheme combining error-correcting codes with refreshing techniques is proposed to correct errors and mitigate error accumulation in extreme radiation environments. The scheme continuously refreshes the data in RAM so that errors cannot accumulate. Furthermore, because the proposed design shares the same ports as the user module without changing the timing sequence, it can be easily applied to systems whose hardware modules are designed with fixed read and write latency.
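The interplay between correcting codes and refreshing can be illustrated with a small software simulation: each word is stored with a Hamming(7,4) code, and a periodic scrub pass corrects any single bit-flip before a second flip can make the word uncorrectable. This is a sketch of the principle only; the thesis implements the scheme in hardware, and the code parameters here are chosen for brevity:

```python
def hamming_encode(nibble):
    """Encode 4 data bits into a 7-bit Hamming codeword (positions 1..7)."""
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]   # parity over positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]   # parity over positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]   # parity over positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming_correct(code):
    """Correct a single bit-flip in place; the syndrome is its position."""
    s = 0
    for i, bit in enumerate(code, start=1):
        if bit:
            s ^= i
    if s:
        code[s - 1] ^= 1
    return code

def scrub(memory):
    """Periodic refresh: correct every word so single upsets cannot
    accumulate into uncorrectable multi-bit errors."""
    for word in memory:
        hamming_correct(word)

mem = [hamming_encode(0b1011)]
mem[0][2] ^= 1   # one radiation-induced bit-flip
scrub(mem)       # scrubbing repairs it before a second flip lands
print(mem[0] == hamming_encode(0b1011))
```

Without the scrub pass, a second flip in the same word would exceed the single-error correction capability of the code, which is exactly the accumulation the self-refresh scheme prevents.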
Implementing intelligent systems with constrained hardware resources is a challenge. In the third work, an adaptive hardware resource management system for multiple AI tasks in harsh environments was designed. Inspired by the refreshing concept of the second work, we utilise a key feature of FPGAs, partial reconfiguration, to improve the reliability and efficiency of the AI system. More importantly, this feature provides the capability to manage hardware resources for deep learning acceleration. In the proposed design, the on-chip hardware resources are dynamically managed to improve the flexibility, performance, and power efficiency of deep learning inference systems. The deep learning units provided by Xilinx are used to perform multiple AI tasks simultaneously, and the experiments show significant improvements in power efficiency across a wide range of scenarios with different workloads. To further improve system performance, the concept of reconfiguration was extended, resulting in an adaptive DL software framework. This framework provides a significant level of adaptability for various deep learning algorithms on an FPGA-based edge computing platform. To meet the specific accuracy and latency requirements of the running applications and operating environments, the platform may dynamically update hardware and software (e.g., processing pipelines) to achieve better cost, power, and processing efficiency than a static system.
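The management policy behind such a system can be sketched as a small scheduler: each accelerator bitstream has a latency/power profile, and the manager loads a new one (a partial reconfiguration) only when the currently loaded accelerator no longer meets the latency budget. Profile names and numbers below are invented, not actual Xilinx DPU configurations:

```python
# Hypothetical accelerator profiles: name -> (latency_ms, power_w).
PROFILES = {
    "dpu_small":  (40.0, 1.5),
    "dpu_medium": (15.0, 3.0),
    "dpu_large":  (5.0, 7.0),
}

def select_accelerator(latency_budget_ms):
    """Pick the lowest-power profile that still meets the latency budget."""
    feasible = [(p, n) for n, (l, p) in PROFILES.items() if l <= latency_budget_ms]
    return min(feasible)[1] if feasible else "dpu_large"

class SlotManager:
    """Tracks which bitstream occupies the reconfigurable slot."""
    def __init__(self):
        self.loaded = None

    def dispatch(self, latency_budget_ms):
        want = select_accelerator(latency_budget_ms)
        if want != self.loaded:
            # In hardware this is where partial reconfiguration would run.
            self.loaded = want
        return self.loaded

mgr = SlotManager()
print(mgr.dispatch(50))  # relaxed budget -> small, low-power accelerator
print(mgr.dispatch(10))  # tight budget  -> large, fast accelerator
```

Because reconfiguration itself costs time, a real policy would also weigh the reconfiguration overhead against the expected duration of the new workload before swapping.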
Lux junior 2023: 16th International Forum for Young Lighting Researchers, 23–25 June 2023, Ilmenau: Proceedings
During the 16th International Forum for Young Lighting Researchers, students, doctoral candidates, and young graduates present their research and development results from all areas of lighting technology. Topics range from lighting applications in a wide variety of fields to light measurement technology, automotive lighting, LED applications, and non-visual effects of light. The forum is designed specifically for students and young graduates in the lighting field. Alongside the talks and posters, it offers opportunities for discussion and individual exchange. Over its 30 years, the biennial conference has become a traditional event, organized by the Lighting Engineering Group of TU Ilmenau together with the Thüringen-Nordhessen regional group of the German Society for Lighting Technology (Deutsche Lichttechnische Gesellschaft, LiTG e. V.).
Utilizing Runtime Information for Accurate Root Cause Identification in Performance Diagnosis
This dissertation highlights that existing performance diagnostic tools often become less effective due to their inherent inaccuracies in modern software. To overcome these inaccuracies and effectively identify the root causes of performance issues, it is necessary to incorporate supplementary runtime information into these tools. Within this context, the dissertation integrates specific runtime information into two typical performance diagnostic tools: profilers and causal tracing tools.
The integration yields a substantial enhancement in the effectiveness of performance diagnosis. Among these tools, gprof stands out as a representative profiler. Nonetheless, its effectiveness diminishes because time costs calculated from CPU sampling fail to accurately and adequately pinpoint the root causes of performance issues in complex software. To tackle this challenge, the dissertation introduces an innovative methodology called value-assisted cost profiling (vProf), which incorporates variable values observed at runtime into the profiling process.
By continuously sampling variable values from both normal and problematic executions, vProf refines function cost estimates, identifies anomalies in value distributions, and highlights potentially problematic code areas that could be the actual sources of performance issues. The effectiveness of vProf is validated through the diagnosis of 18 real-world performance issues in four widely used applications. Remarkably, vProf outperforms other state-of-the-art tools, successfully diagnosing all issues, including three that had remained unresolved for over four years.
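The core intuition, comparing the distributions of a variable's sampled values between normal and problematic runs, can be sketched as follows. The variable, its sample values, and the use of total-variation distance are all illustrative; vProf's actual statistics may differ:

```python
from collections import Counter

def value_distribution(samples):
    """Empirical distribution of a variable's sampled values."""
    total = len(samples)
    return {v: c / total for v, c in Counter(samples).items()}

def distribution_shift(normal, buggy):
    """Total-variation distance between the two distributions; a large
    shift flags the variable (and its code region) as a suspect."""
    p, q = value_distribution(normal), value_distribution(buggy)
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0) - q.get(k, 0)) for k in keys)

# Hypothetical samples of a loop-bound variable taken by the profiler:
normal_run = [10] * 90 + [20] * 10
buggy_run  = [10] * 10 + [5000] * 90   # pathological values dominate
shift = distribution_shift(normal_run, buggy_run)
print(f"shift = {shift:.2f}")  # near 1.0 -> strong anomaly signal
```

Ranking variables by such a shift score, alongside refined cost estimates, is what lets a value-assisted profiler point past the hot function to the value that made it hot.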
Causal tracing tools reveal the root causes of performance issues in complex software by generating tracing graphs. However, these graphs often suffer from inherent inaccuracies, characterized by superfluous (over-connected) and missed (under-connected) edges. These inaccuracies arise from the diversity of programming paradigms. To mitigate the inaccuracies, the dissertation proposes an approach to derive strong and weak edges in tracing graphs based on the vertices’ semantics collected during runtime. By leveraging these edge types, a beam-search-based diagnostic algorithm is employed to identify the most probable causal paths. Causal paths from normal and buggy executions are differentiated to provide key insights into the root causes of performance issues. To validate this approach, a causal tracing tool named Argus is developed and tested across multiple versions of macOS. It is evaluated on 12 well-known spinning pinwheel issues in popular macOS applications. Notably, Argus successfully diagnoses the root causes of all identified issues, including 10 issues that had remained unresolved for several years.
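The path-ranking step can be sketched as a beam search over a tracing graph whose edges carry "strong" or "weak" labels derived from vertex semantics. The graph, edge weights, and vertex names below are invented for illustration and are not Argus's actual model:

```python
# Map edge labels to heuristic transition probabilities (assumed values).
EDGE_WEIGHT = {"strong": 0.9, "weak": 0.3}

def beam_search(graph, start, goal, beam_width=2):
    """Return (score, path) for the highest-scoring causal path found."""
    beam = [(1.0, [start])]
    while beam:
        candidates = []
        for score, path in beam:
            if path[-1] == goal:
                return score, path
            for nxt, kind in graph.get(path[-1], []):
                if nxt not in path:  # avoid cycles
                    candidates.append((score * EDGE_WEIGHT[kind], path + [nxt]))
        beam = sorted(candidates, reverse=True)[:beam_width]
    return 0.0, []

# Hypothetical tracing graph from a UI hang: which chain of events most
# plausibly connects the user's click to the spinning pinwheel?
graph = {
    "click":   [("handler", "strong"), ("timer", "weak")],
    "handler": [("io_wait", "strong")],
    "timer":   [("io_wait", "weak")],
    "io_wait": [("spin", "strong")],
}
score, path = beam_search(graph, "click", "spin")
print(path)
```

Preferring strong edges lets the search suppress over-connected (spurious) links, while weak edges keep under-connected regions reachable rather than pruning them outright.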
The results from both tools exemplify the substantial enhancement of performance diagnostic tools achievable by harnessing runtime information. The integration can effectively mitigate inherent inaccuracies, support inaccuracy-tolerant diagnostic algorithms, and provide key insights for pinpointing root causes.