Model-based Resource Management for Fine-grained Services
Brief Biography: Alim Ul Gias is currently a Research Associate at the Centre for Parallel Computing (CPC), University of Westminster. He completed his PhD at Imperial College London in 2022. Before starting his PhD, Alim was a lecturer at the Institute of Information Technology (IIT), University of Dhaka (DU), where he also completed his bachelor's and master's degrees. His current research focuses on different Quality of Service (QoS) aspects of cloud-native applications, e.g., microservices. In particular, he aims to address the performance and resource management challenges concerning the microservice architecture.
The emergence of DevOps has changed the way modern distributed software systems are developed. Architectures decomposed into fine-grained services, such as microservices or function-as-a-service (FaaS), are now widespread across many organizations. From a resource management perspective, although systems built with such architectures have many benefits, there are still research challenges that need further attention. In this study, we focus on three such challenges, each concerning a specific system resource: compute, memory, or storage. Firstly, we focus on scaling the capacity of microservices at runtime. Here, the challenge is to design an autoscaler that can decide between vertical and horizontal scaling options to distribute the CPU capacity. Secondly, we focus on estimating the required capacity of an on-premises FaaS platform such that the service level agreements (SLAs) for function response times are satisfied. The challenge here is to address the cold start dilemma, i.e., that a cold start delays a function response but reduces memory consumption. Thus, we must find a limit on cold starts such that the memory consumption remains in check while the SLAs are satisfied. Finally, we focus on storage management for distributed tracing targeted at microservices. The volume of such traces generated in a data center can be on the scale of tens of terabytes per day, but only a small fraction of these traces is useful for troubleshooting. The objective then is to sample only the useful traces. The key to addressing all these challenges is first modeling the dynamics concerning the resources and subsequently leveraging the model in a resource controller. To address the first challenge, we have developed an autoscaler, ATOM, that leverages layered queueing network (LQN) models to make its scaling decisions. Our experiment with a real-life application shows that ATOM produces 30-37% better results than the baseline autoscalers.
For the second challenge, we have developed COCOA, a cold-start-aware capacity planner. COCOA utilizes M/M/k setup and LQN models to assess the cold start scenario and estimate the required capacity. We show with simulation that COCOA can reduce over-provisioning by over 70% compared to availability-aware approaches. Finally, addressing the third challenge, we propose SampleHST, a trace sampler that works under a storage budget constraint. SampleHST relies on either bag-of-words or graph-based models to represent a trace and groups similar traces using online clustering to perform sampling. We have evaluated the performance of SampleHST using data from both the literature and production, which shows it produces 1.2x to 19x better results than the state of the art.
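COCOA's capacity estimate builds on an M/M/k model of the function servers. As a minimal, hedged sketch of the underlying idea, the following computes the mean response time of a plain M/M/k queue via the Erlang-C formula; it omits the setup (cold start) extension that the thesis actually uses, and the function name and parameters are illustrative.

```python
from math import factorial

def mmk_mean_response_time(lam: float, mu: float, k: int) -> float:
    """Mean response time of a plain M/M/k queue (Erlang-C formula).

    lam: arrival rate, mu: per-server service rate, k: number of servers.
    Requires lam < k * mu for stability.
    """
    a = lam / mu                      # offered load in Erlangs
    rho = a / k                       # per-server utilization
    if rho >= 1:
        raise ValueError("unstable: lam must be < k * mu")
    # Erlang-C probability that an arriving job has to wait
    tail = (a ** k) / (factorial(k) * (1 - rho))
    head = sum(a ** n / factorial(n) for n in range(k))
    p_wait = tail / (head + tail)
    wq = p_wait / (k * mu - lam)      # mean waiting time in queue
    return wq + 1 / mu                # waiting time + service time
```

A capacity planner in this spirit would sweep k upward until the predicted response time falls below the SLA target; COCOA's actual model additionally accounts for server setup delays caused by cold starts.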
Performance analysis of Discrete Cosine Transform in Multibeamforming
Aperture arrays are widely used in beamforming applications, where element signals are steered towards a particular direction of interest and a single beam is formed. Multibeamforming is an extension of single beamforming, desired in fields where sources located in multiple directions are of interest. The Discrete Fourier Transform (DFT) is usually used in these scenarios to segregate the received signals based on their directions of arrival. In the case of broadband signals, the DFT of the data at each sensor of an array decomposes the signal into multiple narrowband signals. However, if hardware cost and implementation complexity are of concern while maintaining the desired performance, the Discrete Cosine Transform (DCT) outperforms the DFT.
In this work, instead of the DFT, the DCT is used to decompose the received signal into multiple beams in multiple directions. The DCT offers a simple and efficient hardware implementation. Moreover, when low-frequency signals are of interest, the DCT can process correlated data and perform close to the ideal Karhunen-Loeve Transform (KLT).
To further improve the accuracy and reduce the implementation cost, an efficient technique using Algebraic Integer Quantization (AIQ) of the DCT is presented. Both 8-point and 16-point versions of the DCT using AIQ mapping are presented, and their performance is analyzed in terms of accuracy and hardware complexity. It is shown that the proposed AIQ DCT offers considerable savings in hardware compared to the DFT and the classical DCT while maintaining the same accuracy of beam steering in multibeamforming applications.
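For reference, the exact floating-point (orthonormal) DCT-II that an AIQ mapping approximates can be sketched as below. This is background only, not the AIQ implementation from the paper, and the function name is illustrative.

```python
from math import cos, pi, sqrt

def dct_ii(x):
    """Orthonormal DCT-II of a sequence x (exact floating-point version,
    not the AIQ-mapped hardware variant described in the abstract)."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * cos(pi * (i + 0.5) * k / n) for i in range(n))
        # Orthonormal scaling so the transform preserves signal energy
        scale = sqrt(1 / n) if k == 0 else sqrt(2 / n)
        out.append(scale * s)
    return out
```

For a constant input all the energy lands in the k = 0 coefficient; this energy compaction on correlated, low-frequency data is what makes the DCT perform close to the KLT.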
A Study of Negative Pion Photoproduction in Complex Nuclei by the Analysis of Activation Products
Abstract Not Provided
Does Collaborative Editing Help Mitigate Security Vulnerabilities in Crowd-Shared IoT Code Examples?
Background: With the proliferation of crowd-sourced developer forums, software developers are increasingly sharing coding solutions to programming problems with others. The decentralized nature of knowledge sharing on these sites has raised concerns about the sharing of security-vulnerable code, which can then be reused in mission-critical software systems, making those systems vulnerable in the process. Collaborative editing has been introduced in forums like Stack Overflow to improve the quality of the shared content. Aim: In this paper, we investigate whether code editing can mitigate shared vulnerable code examples by analyzing IoT code snippets and their revisions in three Stack Exchange sites: Stack Overflow, Arduino, and Raspberry Pi. Method: We analyze the vulnerabilities present in shared IoT C/C++ code snippets, as C/C++ is one of the most widely used languages in mission-critical and low-powered IoT devices. We further analyze the revisions made to these code snippets and their effects. Results: We find several vulnerabilities, such as CWE-788 (Access of Memory Location After End of Buffer), in 740 code snippets. However, we find that the vast majority of posts are not revised, or the revisions are not made to the code snippets themselves (598 out of 740). We also find that revisions are most likely to result in no change to the number of vulnerabilities in a code snippet rather than deteriorating or improving the snippet. Conclusions: We conclude that the current collaborative editing system in the forums may be insufficient to help mitigate vulnerabilities in the shared code.
Comment: 10 pages, 14 figures, ESEM2
Automatic Prediction of Rejected Edits in Stack Overflow
The content quality of shared knowledge in Stack Overflow (SO) is crucial in supporting software developers with their programming problems. Thus, SO allows its users to suggest edits to improve the quality of a post (i.e., a question or answer). However, existing research shows that many suggested edits in SO are rejected due to undesired content/formats or violation of the edit guidelines. Such a scenario frustrates or demotivates users who would like to make good-quality edits. Therefore, our research focuses on assisting SO users by offering them suggestions on how to improve their editing of posts. First, we manually investigate 764 (382 questions + 382 answers) edits rejected by rollbacks and produce a catalog of 19 rejection reasons. Second, we extract 15 text- and user-based features to capture those rejection reasons. Third, we develop four machine learning models using those features. Our best-performing model can predict rejected edits with 69.1% precision, 71.2% recall, 70.1% F1-score, and 69.8% overall accuracy. Fourth, we introduce an online tool named EditEx that works with the SO edit system. EditEx can assist users while editing posts by suggesting the potential causes of rejections. We recruit 20 participants to assess the effectiveness of EditEx. Half of the participants (the treatment group) use EditEx and the other half (the control group) use the standard SO edit system to edit posts. According to our experiment, EditEx can complement the standard SO edit system by preventing 49% of rejected edits, including the commonly rejected ones. It can even prevent 12% of rejections in free-form regular edits. The treatment group finds the potential rejection reasons identified by EditEx influential. Furthermore, the median workload of suggesting edits using EditEx is half that of the SO edit system.
Comment: Accepted for publication in the Empirical Software Engineering (EMSE) journal
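The reported figures are standard binary classification metrics with "rejected edit" as the positive class. As an illustration of how such figures are derived from confusion counts (the actual counts behind the paper's 69.1%/71.2%/70.1%/69.8% results are not given in the abstract), one could compute:

```python
def classification_metrics(tp: int, fp: int, fn: int, tn: int):
    """Precision, recall, F1-score, and accuracy from binary confusion
    counts, treating 'rejected edit' as the positive class.
    The counts here are illustrative, not the paper's."""
    precision = tp / (tp + fp)            # of predicted rejections, how many were real
    recall = tp / (tp + fn)               # of real rejections, how many were caught
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy
```

F1 is the harmonic mean of precision and recall, which is why the reported 70.1% F1-score sits between the 69.1% precision and 71.2% recall.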