Evaluation Methodologies in Software Protection Research
Man-at-the-end (MATE) attackers have full control over the system on which
the attacked software runs, and try to break the confidentiality or integrity
of assets embedded in the software. Both companies and malware authors want to
prevent such attacks. This has driven an arms race between attackers and
defenders, resulting in a plethora of different protection and analysis
methods. However, it remains difficult to measure the strength of protections
because MATE attackers can reach their goals in many different ways and a
universally accepted evaluation methodology does not exist. This survey
systematically reviews the evaluation methodologies of papers on obfuscation, a
major class of protections against MATE attacks. For 572 papers, we collected
113 aspects of their evaluation methodologies, ranging from sample set types
and sizes, over sample treatment, to performed measurements. We provide
detailed insights into how the academic state of the art evaluates both the
protections and analyses thereon. In summary, there is a clear need for better
evaluation methodologies. We identify nine challenges for software protection
evaluations, which represent threats to the validity, reproducibility, and
interpretation of research results in the context of MATE attacks
An Online Resource Scheduling for Maximizing Quality-of-Experience in Meta Computing
Meta Computing is a new computing paradigm, which aims to solve the problem
of computing islands in current edge computing paradigms and integrate all the
resources on a network by incorporating cloud, edge, and particularly
terminal-end devices. It sheds light on solving the problem of insufficient
computing power. However, at this stage, due to technical limitations, it is
impossible to integrate the resources of the whole network. Thus, we create a
new meta computing architecture composed of multiple meta computers, each of
which integrates the resources in a small-scale network. To make meta computing
widely applied in society, the service quality and user experience of meta
computing cannot be ignored. Consider a meta computing system that provides
services to users by scheduling meta computers: how should it choose among
multiple meta computers to achieve the maximum Quality-of-Experience (QoE)
under a limited budget, especially when the true expected QoE of each meta
computer is not known a priori? Existing studies, however, usually ignore costs
and budgets and barely consider the ubiquitous law of diminishing marginal utility.
In this paper, we formulate a resource scheduling problem from the perspective
of the multi-armed bandit (MAB). To determine a scheduling strategy that can
maximize the total QoE utility under a limited budget, we propose an upper
confidence bound (UCB) based algorithm and model the utility of service by
using a concave function of total QoE to characterize the marginal utility in
the real world. We theoretically upper-bound the regret of our proposed
algorithm, showing sublinear growth with respect to the budget. Finally, extensive experiments
are conducted, and the results indicate the correctness and effectiveness of
our algorithm
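The budget-limited bandit scheduler described above can be sketched as follows. This is an illustrative reconstruction, not the paper's exact algorithm: the Bernoulli QoE model, the unit costs, the square-root utility (one simple concave function capturing diminishing marginal returns), and the (mean + bonus) / cost selection index are all assumptions.

```python
import math
import random

def schedule(qoe_means, costs, budget, seed=0):
    """Budget-limited UCB sketch: repeatedly pick the meta computer with the
    highest (empirical mean + exploration bonus) / cost index until the budget
    is exhausted, then report a concave (sqrt) utility of the total QoE."""
    rng = random.Random(seed)
    k = len(costs)
    counts = [0] * k          # pulls per meta computer
    means = [0.0] * k         # empirical mean QoE per meta computer
    total_qoe, spent, t = 0.0, 0.0, 0
    while spent + min(costs) <= budget:
        t += 1
        if t <= k:
            i = t - 1         # try each meta computer once first
        else:
            i = max(range(k), key=lambda j:
                    (means[j] + math.sqrt(2 * math.log(t) / counts[j])) / costs[j])
        if spent + costs[i] > budget:
            break             # chosen meta computer is no longer affordable
        r = 1.0 if rng.random() < qoe_means[i] else 0.0   # Bernoulli QoE sample
        counts[i] += 1
        means[i] += (r - means[i]) / counts[i]            # incremental mean
        total_qoe += r
        spent += costs[i]
    return math.sqrt(total_qoe), spent    # concave utility of total QoE
```

With unit costs and a budget of 50, the scheduler makes exactly 50 pulls, concentrating them on the arm with the higher observed QoE as the exploration bonus shrinks.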
Multitenant Containers as a Service (CaaS) for Clouds and Edge Clouds
Cloud computing, offering on-demand access to computing resources through the
Internet and the pay-as-you-go model, has marked the last decade with its three
main service models; Infrastructure as a Service (IaaS), Platform as a Service
(PaaS), and Software as a Service (SaaS). The lightweight nature of containers
compared to virtual machines has led to the rapid uptake of another service
model in recent years, Containers as a Service (CaaS), which falls between IaaS and PaaS
regarding control abstraction. However, when CaaS is offered to multiple
independent users, or tenants, a multi-instance approach is used, in which each
tenant receives its own separate cluster, which reimposes significant overhead
due to employing virtual machines for isolation. If CaaS is to be offered not
just at the cloud, but also at the edge cloud, where resources are limited,
another solution is required. We introduce a native CaaS multitenancy
framework, meaning that tenants share a cluster, which is more efficient than
the one tenant per cluster model. Whenever there are shared resources,
isolation of multitenant workloads is an issue. Such workloads can be isolated
by Kata Containers today. In addition, our framework accommodates application
requirements that demand complete isolation and a fully customized environment.
Node-level slicing empowers tenants to programmatically reserve isolated
subclusters where they can choose the container runtime that suits application
needs. The framework is publicly available as liberally-licensed, free,
open-source software that extends Kubernetes, the de facto standard container
orchestration system. It is in production use within the EdgeNet testbed for
researchers
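The node-level slicing idea can be illustrated with a minimal sketch. This is not EdgeNet's actual API; the function name, the dictionary shape, and the runtime names are invented for illustration. The point is that each tenant's reservation carries its own runtime choice (e.g. Kata Containers for VM-level isolation, or a plain shared-kernel runtime).

```python
def reserve_subcluster(free_nodes, tenant, n_nodes, runtime):
    """Hypothetical sketch of node-level slicing: a tenant programmatically
    reserves n isolated nodes and picks the container runtime that suits its
    application's isolation needs (e.g. "kata" or "runc")."""
    if n_nodes > len(free_nodes):
        raise ValueError("not enough free nodes for this reservation")
    reserved, remaining = free_nodes[:n_nodes], free_nodes[n_nodes:]
    subcluster = {
        "tenant": tenant,
        "nodes": reserved,     # nodes dedicated to this tenant only
        "runtime": runtime,    # each subcluster may use a different runtime
    }
    return subcluster, remaining
```

Because the reserved nodes leave the shared pool entirely, a tenant that demands complete isolation gets it without a dedicated cluster per tenant.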
History, Features, Challenges, and Critical Success Factors of Enterprise Resource Planning (ERP) in The Era of Industry 4.0
ERP has been adopting newer features over the last several decades and shaping global businesses with the advent of newer technologies. This research article uses a state-of-the-art review method to review and synthesize the latest information on the possible integration of potential Industry 4.0 technologies into the future development of ERP. The software systems that contributed to the development of existing ERP are found to be Material Requirement Planning (MRP), Manufacturing Resource Planning (MRPII), and Computer Integrated Manufacturing (CIM). Potential disruptive Industry 4.0 technologies expected to be integrated into future ERP are artificial intelligence, business intelligence, the internet of things, big data, blockchain technology, and the omnichannel strategy. Notable critical success factors of ERP have been reported to be top management support, project team, IT infrastructure, communication, skilled staff, training & education, and monitoring & evaluation. Moreover, cybersecurity has been found to be the most challenging issue to overcome in future versions of ERP. This review article could help future ERP researchers and respective stakeholders contribute to integrating newer features in future versions of ERP
The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions
The Metaverse offers a second world beyond reality, where boundaries are
non-existent, and possibilities are endless through engagement and immersive
experiences using the virtual reality (VR) technology. Many disciplines can
benefit from the advancement of the Metaverse when accurately developed,
including the fields of technology, gaming, education, art, and culture.
Nevertheless, developing the Metaverse environment to its full potential is an
ambiguous task that needs proper guidance and directions. Existing surveys on
the Metaverse focus only on a specific aspect and discipline of the Metaverse
and lack a holistic view of the entire process. To this end, a more holistic,
multi-disciplinary, in-depth, and academic and industry-oriented review is
required to provide a thorough study of the Metaverse development pipeline. To
address these issues, we present in this survey a novel multi-layered pipeline
ecosystem composed of (1) the Metaverse computing, networking, communications
and hardware infrastructure, (2) environment digitization, and (3) user
interactions. For every layer, we discuss the components that detail the steps
of its development. Also, for each of these components, we examine the impact
of a set of enabling technologies and empowering domains (e.g., Artificial
Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on
its advancement. In addition, we explain the importance of these technologies
to support decentralization, interoperability, user experiences, interactions,
and monetization. Our presented study highlights the existing challenges for
each component, followed by research directions and potential solutions. To the
best of our knowledge, this survey is the most comprehensive and allows users,
scholars, and entrepreneurs to get an in-depth understanding of the Metaverse
ecosystem to find their opportunities and potentials for contribution
One Small Step for Generative AI, One Giant Leap for AGI: A Complete Survey on ChatGPT in AIGC Era
OpenAI has recently released GPT-4 (a.k.a. ChatGPT plus), which is
demonstrated to be one small step for generative AI (GAI), but one giant leap
for artificial general intelligence (AGI). Since its official release in
November 2022, ChatGPT has quickly attracted numerous users with extensive
media coverage. Such unprecedented attention has also motivated numerous
researchers to investigate ChatGPT from various aspects. According to Google
Scholar, there are more than 500 articles with ChatGPT in their titles or
mentioning it in their abstracts. Considering this, a review is urgently
needed, and our work fills this gap. Overall, this work is the first to survey
ChatGPT with a comprehensive review of its underlying technology, applications,
and challenges. Moreover, we present an outlook on how ChatGPT might evolve to
realize general-purpose AIGC (a.k.a. AI-generated content), which will be a
significant milestone for the development of AGI
Theory of Mobile Infocommunication Systems. System Architecture
The manual contains a description of the logical and physical structures, procedures, algorithms, protocols, and principles of construction and operation of cellular mobile communication networks (up to 3G) and mobile infocommunications (4G and higher), paying attention to the general architectures of mobile operators' networks, their management and coordination, and the continuous evolution of the means of operation and methods of providing services in such networks. The manual has seven sections and is structured so that the complexity of the material increases with each subsequent chapter. The textbook is intended for applicants for a bachelor's degree in specialty 172 "Telecommunications and Radio Engineering" (Ukrainian: «Телекомунікації та радіотехніка»), and will also be useful to graduate students and to scientific and engineering workers in the field of information and telecommunication systems and technologies
Reinforcement Learning-based User-centric Handover Decision-making in 5G Vehicular Networks
The advancement of 5G technologies and vehicular networks opens a new paradigm for Intelligent Transportation Systems (ITS) in safety and infotainment services in urban and highway scenarios. Connected vehicles are vital for enabling massive data sharing and supporting such services. Consequently, a stable connection is compulsory to transmit data across the network successfully. The new 5G technology introduces more bandwidth, stability, and reliability, but it suffers from a shorter communication range, leading to more frequent handovers and connection drops. The shift from the base-station-centric view to the user-centric view helps to cope with the smaller communication range and ultra-density of 5G networks. In this thesis, we propose a series of strategies to improve connection stability through efficient handover decision-making. First, a modified probabilistic approach, M-FiVH, aims at reducing 5G handovers and enhancing network stability. Next, an adaptive learning approach employs Connectivity-oriented SARSA Reinforcement Learning (CO-SRL) for user-centric Virtual Cell (VC) management to enable efficient handover (HO) decisions. Following that, a user-centric Factor-distinct SARSA Reinforcement Learning (FD-SRL) approach combines a time-series-oriented LSTM with adaptive SRL for VC and HO management by considering both historical and real-time data. The random direction of vehicular movement, high mobility, network load, uncertain road traffic situations, and the signal strength from cellular transmission towers vary over time and cannot always be predicted. Our proposed approaches maintain stable connections by reducing the number of HOs through selecting an appropriate VC size and managing HOs. A series of improvements demonstrated through realistic simulations showed that M-FiVH, CO-SRL, and FD-SRL were successful in reducing the number of HOs and the average cumulative HO time. 
We provide an analysis and comparison of several approaches and demonstrate that our proposed approaches perform better in terms of network connectivity
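The SARSA-based handover decision-making can be sketched in miniature. This is a generic on-policy SARSA toy, not the thesis's CO-SRL or FD-SRL: the three-cell signal map, the handover penalty, and all hyperparameters are invented for illustration. The state is the currently serving cell and the action is the target cell (staying means no handover).

```python
import random
from collections import defaultdict

def sarsa_handover(episodes=500, steps=20, seed=0):
    """Minimal SARSA sketch for handover decisions: the reward is the target
    cell's signal quality minus a penalty whenever a handover occurs, so the
    agent learns to hand over only when the signal gain justifies the cost."""
    rng = random.Random(seed)
    signal = {0: 0.2, 1: 0.9, 2: 0.5}   # hypothetical mean signal per cell
    cells = list(signal)
    alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount, exploration
    ho_penalty = 0.3                    # cost of performing a handover
    Q = defaultdict(float)              # Q[(current_cell, target_cell)]

    def pick(c):
        if rng.random() < eps:
            return rng.choice(cells)                    # explore
        return max(cells, key=lambda a: Q[(c, a)])      # exploit

    for _ in range(episodes):
        c = rng.choice(cells)
        a = pick(c)
        for _ in range(steps):
            r = signal[a] - (ho_penalty if a != c else 0.0)
            c2, a2 = a, pick(a)         # next state is the chosen cell
            # on-policy SARSA update uses the action actually taken next
            Q[(c, a)] += alpha * (r + gamma * Q[(c2, a2)] - Q[(c, a)])
            c, a = c2, a2
    return Q
```

After training, the learned values prefer handing over from the weak cell 0 to the strong cell 1 despite the penalty, and staying put once in cell 1, which is the connection-stability behaviour the thesis targets.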
Associated Random Neural Networks for Collective Classification of Nodes in Botnet Attacks
Botnet attacks are a major threat to networked systems because of their
ability to turn the network nodes that they compromise into additional
attackers, leading to the spread of high volume attacks over long periods. The
detection of such Botnets is complicated by the fact that multiple network IP
addresses will be simultaneously compromised, so that Collective Classification
of compromised nodes, in addition to the already available traditional methods
that focus on individual nodes, can be useful. Thus this work introduces a
collective Botnet attack classification technique that operates on traffic from
an n-node IP network with a novel Associated Random Neural Network (ARNN) that
identifies the nodes which are compromised. The ARNN is a recurrent
architecture that incorporates two mutually associated, interconnected and
architecturally identical n-neuron random neural networks, that act
simultaneously as mutual critics to reach the decision regarding which of the n
nodes have been compromised. A novel gradient descent learning algorithm is
presented for the ARNN, and is shown to operate effectively both with
conventional off-line training from prior data, and with on-line incremental
training without prior off-line learning. Real data comprising over 700,000
packets from a 107-node network is used to evaluate the ARNN, showing that it
provides accurate predictions. Comparisons with other well-known
state-of-the-art methods using the same training and testing datasets show that the ARNN
offers significantly better performance
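The mutual-critic idea behind the ARNN can be illustrated with a toy sketch. This is emphatically not the random-neural-network model of the paper: it replaces the two n-neuron random neural networks with two identical sigmoid scorers that feed each other's previous beliefs back in, and the per-node "suspicious traffic rate" input is invented. It only shows how two coupled, architecturally identical scorers can reach a collective decision over all nodes at once.

```python
import math

def mutual_critic_scores(traffic, rounds=5):
    """Toy mutual-critic illustration (not the actual ARNN): two identical
    scorers each rate every node as compromised from local traffic evidence
    plus the other scorer's network-wide view, iterated for a few rounds."""
    n = len(traffic)
    a = [0.5] * n            # scorer A's belief per node, start undecided
    b = [0.5] * n            # scorer B's belief per node
    for _ in range(rounds):
        mean_b = sum(b) / n
        mean_a = sum(a) / n
        # local evidence (scaled traffic) plus the peer's relative opinion
        a = [1 / (1 + math.exp(-(4 * traffic[i] - 2 + (b[i] - mean_b))))
             for i in range(n)]
        b = [1 / (1 + math.exp(-(4 * traffic[i] - 2 + (a[i] - mean_a))))
             for i in range(n)]
    return [(a[i] + b[i]) / 2 for i in range(n)]   # averaged joint decision
```

Nodes with heavier suspicious traffic end up with higher compromise scores, and the coupling term lets each scorer's verdict on one node be sharpened by the other's view of the whole network.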
High stakes online assessments: A case study of National Benchmark Tests during COVID-19
Owing to the COVID-19 pandemic, paper-based delivery of the National Benchmark Tests (NBTs) was not possible during the 2020 testing cycle. The NBTs, being a large-scale national assessment project, had no alternative but to offer the tests online. Moving these high-stakes tests online meant that certain factors had to be considered to retain the credibility and security of the tests without compromising the validity and reliability of the scores. Digitising the paper-based NBTs required an innovative, flexible and robust solution that promotes fairness and ensures the quality of testing is maintained, while in many ways remaining comparable to the paper-based implementation. To deliver the NBTs online, the following important considerations needed to be addressed: test security and integrity, test candidate identification processes, the prevention of dishonest behaviour, test scheduling and timing, and technical support. The online testing solution chosen integrates the following aspects: it 1) enables all candidates to take the same test at the same time; 2) ensures the quality and similarity in experience of test delivery for all candidates as far as possible; 3) prevents candidates from accessing other applications and devices during the test; 4) enables proctoring before, during and after the tests to encourage appropriate behaviour similar to that expected during paper-based tests; 5) provides live support to assist candidates in dealing with technical challenges and to guide them through the test sessions; and 6) processes and presents data and scores in the same way as for the paper-based tests. In this article, we analyse the integration and complexity of the online NBTs solution and the opportunities and challenges associated with this form of delivery, and reflect on test candidates' and the team's experiences. We discuss components of online assessment and argue that this is also relevant to high-stakes course assessments. 
This case study should help to refine the scope of further research and development in the use of online high-stakes assessments