Modern computing: Vision and challenges
Over the past six decades, the field of computing systems has undergone significant transformations, profoundly impacting society through developments such as the Internet and the commodification of computing. Underpinned by technological advancements, computer systems, far from being static, have continuously evolved and adapted to fill multifaceted societal niches. This has led to new paradigms such as cloud, fog, and edge computing and the Internet of Things (IoT), which offer fresh economic and creative opportunities. Nevertheless, this rapid change poses complex research challenges, especially in maximizing potential and enhancing functionality. To maintain an economical level of performance that meets ever-tighter requirements, one must understand what drives the emergence and expansion of new models, and how contemporary challenges differ from past ones. To that end, this article investigates and assesses the factors influencing the evolution of computing systems, covering established systems and architectures as well as newer developments such as serverless computing, quantum computing, and on-device AI at the edge. Tracing this technological trajectory reveals several trends: the rapid obsolescence of frameworks due to business and technical constraints, a move towards specialized systems and models, and varying approaches to centralized and decentralized control. This comprehensive review of modern computing systems looks ahead to the future of research in the field, highlighting key challenges and emerging trends, and underscoring their importance in cost-effectively driving technological progress.
Evaluation Methodologies in Software Protection Research
Man-at-the-end (MATE) attackers have full control over the system on which
the attacked software runs, and try to break the confidentiality or integrity
of assets embedded in the software. Both companies and malware authors want to
prevent such attacks. This has driven an arms race between attackers and
defenders, resulting in a plethora of different protection and analysis
methods. However, it remains difficult to measure the strength of protections
because MATE attackers can reach their goals in many different ways and a
universally accepted evaluation methodology does not exist. This survey
systematically reviews the evaluation methodologies of papers on obfuscation, a
major class of protections against MATE attacks. For 572 papers, we collected
113 aspects of their evaluation methodologies, ranging from sample set types
and sizes, over sample treatment, to performed measurements. We provide
detailed insights into how the academic state of the art evaluates both the
protections and analyses thereon. In summary, there is a clear need for better
evaluation methodologies. We identify nine challenges for software protection
evaluations, which represent threats to the validity, reproducibility, and
interpretation of research results in the context of MATE attacks.
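The survey above concerns how obfuscations are evaluated; for readers unfamiliar with the protections themselves, here is a minimal, hypothetical sketch of one classic obfuscation, the opaque predicate: a branch condition whose outcome the defender knows at obfuscation time but that is hard for an attacker's static analysis to resolve. (The function and values below are illustrative, not from any surveyed paper.)

```python
# Sketch of control-flow obfuscation with an opaque predicate.
# x*x + x = x*(x + 1) is a product of consecutive integers, so it is
# always even: the guard below is always True, and the second return
# is dead code inserted only to mislead static analysis.

def pay_out(amount: int) -> int:
    """Originally just 'return amount + 42', now wrapped in a bogus branch."""
    if (amount * amount + amount) % 2 == 0:   # opaque predicate: always True
        return amount + 42                    # the real computation
    return amount - 99                        # unreachable decoy branch
```

Measuring how much such a transformation actually slows down a human or automated attacker is precisely the kind of evaluation whose methodology the survey scrutinizes.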
Beam scanning by liquid-crystal biasing in a modified SIW structure
A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for use in 2D scanning arrays with lateral alignment. The 2D array environment requires full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing; the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW), modified to operate as a Groove Gap Waveguide with radiating slots etched on the upper broad wall, which radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed to tune the LCs. At the same time, the RF field remains laterally confined, making it possible to place several antennas in parallel and achieve 2D beam scanning. The design is validated by simulation employing the actual properties of a commercial LC medium.
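The fixed-frequency scanning principle can be summarized by the textbook leaky-wave relation (a general relation, not design-specific values from this paper):

```latex
% Main-beam angle of a leaky-wave antenna, measured from broadside.
% Biasing the LC tunes its permittivity between \varepsilon_\perp and
% \varepsilon_\parallel, which changes the modal phase constant
% \beta(\varepsilon_r) and hence steers the beam at fixed frequency f:
\sin\theta_m\!\left(V_\mathrm{bias}\right)
  \approx \frac{\beta\!\left(\varepsilon_r(V_\mathrm{bias})\right)}{k_0},
\qquad k_0 = \frac{2\pi f}{c}.
```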
Data driven approaches for smart city planning and design: a case scenario on urban data management
Purpose
Because of the use of digital technologies in smart cities, municipalities are increasingly facing issues related to urban data management and are seeking ways to exploit these huge amounts of data for the actualization of data driven services. However, only a few studies discuss challenges related to data driven strategies in smart cities. Accordingly, the purpose of this study is to present data driven approaches (an architecture and a model) for the urban data management needed to improve smart city planning and design. The developed approaches depict how data can underpin sustainable urban development.
Design/methodology/approach
Design science research is adopted, following a qualitative method, to evaluate the architecture developed from a top-level design, using case data from workshops and interviews with experts involved in a smart city project.
Findings
The findings of this study from the evaluations indicate that the identified enablers are useful to support data driven services in smart cities and the developed architecture can be used to promote urban data management. More importantly, findings from this study provide guidelines to municipalities to improve data driven services for smart city planning and design.
Research limitations/implications
Feedback as qualitative data from practitioners provided evidence on how data driven strategies can be achieved in smart cities. However, the model is not validated. Hence, quantitative data is needed to further validate the enablers that influence data driven services in smart city planning and design.
Practical implications
Findings from this study offer practical insights and real-life evidence to define data driven enablers in smart cities and suggest research propositions for future studies. Additionally, this study develops a real conceptualization of data driven method for municipalities to foster open data and digital service innovation for smart city development.
Social implications
The main findings of this study suggest that data governance, interoperability, data security and risk assessment influence data driven services in smart cities. This study derives propositions based on the developed model that identifies enablers for actualization of data driven services for smart cities planning and design.
Originality/value
This study explores the enablers of data driven strategies in smart cities and further develops an architecture and model that municipalities can adopt to structure their urban data initiatives, improving data driven services to make cities smarter. The developed model supports municipalities in managing data from different sources to support the design of data driven services provided by the different enterprises that collaborate in the urban environment.
Towards addressing training data scarcity challenge in emerging radio access networks: a survey and framework
The future of cellular networks is contingent on artificial intelligence (AI) based automation, particularly for radio access network (RAN) operation, optimization, and troubleshooting. To achieve such zero-touch automation, a myriad of AI-based solutions are being proposed in the literature to model and optimize network behavior. However, to work reliably, AI-based automation requires a deluge of training data. Consequently, the success of the proposed AI solutions is limited by a fundamental challenge faced by the cellular network research community: scarcity of training data. In this paper, we present an extensive review of classic and emerging techniques to address this challenge. We first identify the common data types in the RAN and their known use-cases. We then present a taxonomized survey of techniques used in the literature to address training data scarcity for various data types. This is followed by a framework to address the training data scarcity. The proposed framework builds on available information and a combination of techniques, including interpolation, domain knowledge-based methods, generative adversarial networks, transfer learning, autoencoders, few-shot learning, simulators, and testbeds. Potential new techniques to enrich scarce data in cellular networks are also proposed, such as matrix completion and domain knowledge-based techniques leveraging different types of network geometries and network parameters. In addition, an overview of state-of-the-art simulators and testbeds is presented to make readers aware of current and emerging platforms for accessing real data to overcome the data scarcity challenge.
This extensive survey of techniques for addressing training data scarcity, combined with the proposed framework for selecting a suitable technique for a given type of data, can assist researchers and network operators in choosing appropriate methods to overcome the data scarcity challenge when applying AI to radio access network automation.
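Of the techniques the survey covers, interpolation is the simplest to sketch. A minimal, hypothetical illustration on a synthetic KPI trace (the time stamps, throughput values, and function name below are invented for this example, not taken from the paper):

```python
# Sketch: enriching a sparse RAN KPI time series by linear interpolation.
# Assumes a KPI (e.g. cell throughput) is reported only at a few time
# stamps; missing points are filled on a dense time grid.
import numpy as np

def densify_kpi(t_sparse, kpi_sparse, t_dense):
    """Linearly interpolate sparse KPI samples onto a dense time grid."""
    return np.interp(t_dense, t_sparse, kpi_sparse)

# Hypothetical sparse measurements: throughput (Mbps) every 15 minutes
t_sparse = np.array([0.0, 15.0, 30.0, 45.0, 60.0])
kpi_sparse = np.array([20.0, 35.0, 30.0, 50.0, 40.0])

t_dense = np.arange(0.0, 61.0, 1.0)  # one sample per minute
kpi_dense = densify_kpi(t_sparse, kpi_sparse, t_dense)
```

The same fill-in-the-gaps idea underlies the more powerful techniques in the taxonomy (matrix completion, autoencoders, GANs), which replace the linear model with learned structure.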
QoS-aware architectures, technologies, and middleware for the cloud continuum
The recent trend of moving Cloud Computing capabilities to the Edge of the network is reshaping how applications and their middleware supports are designed, deployed, and operated. This new model envisions a continuum of virtual resources between the traditional cloud and the network edge, which is potentially more suitable to meet the heterogeneous Quality of Service (QoS) requirements of diverse application domains and next-generation applications. Several classes of advanced Internet of Things (IoT) applications, e.g., in the industrial manufacturing domain, exhibit heterogeneous QoS requirements and call for QoS management systems to guarantee/control performance indicators, even in the presence of real-world factors such as limited bandwidth and concurrent virtual resource utilization. The present dissertation proposes a comprehensive QoS-aware architecture that addresses the challenges of integrating cloud infrastructure with edge nodes in IoT applications. The architecture provides end-to-end QoS support by incorporating several components for managing physical and virtual resources. The proposed architecture features: i) a multilevel middleware for resolving the convergence between Operational Technology (OT) and Information Technology (IT), ii) an end-to-end QoS management approach compliant with the Time-Sensitive Networking (TSN) standard, iii) new approaches for virtualized network environments, such as running TSN-based applications under Ultra-low Latency (ULL) constraints in virtual and 5G environments, and iv) an accelerated and deterministic container overlay network architecture.
Additionally, the QoS-aware architecture includes two novel middlewares: i) a middleware that transparently integrates multiple acceleration technologies in heterogeneous Edge contexts and ii) a QoS-aware middleware for Serverless platforms that coordinates various QoS mechanisms with the virtualized Function-as-a-Service (FaaS) invocation stack to manage end-to-end QoS metrics. Finally, all architecture components were tested and evaluated on realistic testbeds, demonstrating the efficacy of the proposed solutions.
Neural Decoding Leveraging Motor-Cortex Population Geometry
Intracortical brain-computer interfaces (BCIs) provide the means to do something extraordinary: restore movement to patients with paralysis or amputated limbs. Realizing this potential requires the development of decode algorithms capable of accurately translating measurements of neural activity, in real time, into appropriate time-varying commands for an external device (e.g. prosthetic limb).
This problem is fundamentally interdisciplinary, drawing on tools and insights from engineering, neuroscience, statistics, and computer science, among others. Decode algorithms that have been favored historically tend to be computationally efficient, but perform suboptimally, likely because their assumptions fail to fully and accurately capture the complexity in neural population responses. Recent work harnessing the power of contemporary machine learning methods has raised the performance bar, yet these methods can be computationally demanding and it is unclear what properties of neural and/or behavioral data they exploit. In this dissertation, we characterize properties of motor-cortex population geometry and let these properties dictate decoder design, resulting in methods that perform very well, yet retain the benefits of simpler methods.
We use this approach to develop a closed-loop navigation BCI, and to design a highly accurate, general, and interpretable decoder. The properties described in this dissertation have implications for any BCI. By designing decoders to explicitly respect (and leverage) these properties, we can construct powerful yet practical BCIs that better meet the needs of patients.
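The "computationally efficient but historically suboptimal" decoders the abstract refers to can be sketched as a linear readout of population activity. A hypothetical illustration on synthetic data (the tuning model, dimensions, and noise levels are invented for this sketch and are not the dissertation's actual decoder):

```python
# Sketch: a linear (ridge-regression) neural decoder mapping binned
# firing rates of a simulated neural population to 2-D cursor velocity.
import numpy as np

rng = np.random.default_rng(0)

n_neurons, n_bins = 50, 2000
true_W = rng.normal(size=(n_neurons, 2))            # hidden tuning weights
rates = rng.poisson(5.0, size=(n_bins, n_neurons)).astype(float)
velocity = rates @ true_W + rng.normal(scale=0.5, size=(n_bins, 2))

def fit_ridge(X, Y, lam=1.0):
    """Closed-form ridge regression: W = (X'X + lam*I)^{-1} X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

W = fit_ridge(rates, velocity)
pred = rates @ W

# Decoding accuracy (coefficient of determination, illustrative only)
r2 = 1.0 - np.sum((velocity - pred) ** 2) / np.sum(
    (velocity - velocity.mean(axis=0)) ** 2
)
```

Geometry-aware decoders of the kind the dissertation develops keep this closed-form simplicity while shaping the readout around measured properties of the population response, rather than assuming the linear-Gaussian model above.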