
    Future Mobile Communications: LTE Optimization and Mobile Network Virtualization

    Providing QoS while optimizing the LTE network in a cost-efficient manner is very challenging, and radio scheduling is therefore one of the most important functions in mobile broadband networks. The design of a mobile network radio scheduler must satisfy several objectives: the scheduler needs to maximize radio performance by efficiently distributing the limited radio resources, since the operator's revenue depends on it, and it has to guarantee users' demands in terms of Quality of Service (QoS). The design of an effective scheduler is thus a complex task. In this thesis, the author proposes a radio scheduler optimized towards QoS guarantees and system performance, called the Optimized Service Aware Scheduler (OSA). The OSA scheduler is tested and analyzed in several scenarios and compared against other well-known schedulers. A novel wireless network virtualization framework is also proposed, targeting the concepts of wireless virtualization applied within the 3GPP Long Term Evolution (LTE) system. LTE is one of the new mobile communication systems just entering the market and was therefore chosen as a case study to demonstrate the proposed framework. The framework is implemented in the LTE network simulator and analyzed, highlighting the advantages and potential gains that virtualization can achieve. Two potential gains from using network virtualization in LTE systems are analyzed: multiplexing gain from spectrum sharing, and multi-user diversity gain. Several LTE radio analytical models based on Continuous Time Markov Chains (CTMC) are also designed and developed in this thesis. These models cover three time-domain radio schedulers: Maximum Throughput (MaxT), Blind Equal Throughput (BET), and the Optimized Service Aware Scheduler (OSA). The models deliver results much faster (in the order of seconds to minutes) than simulations, which can take considerably longer, such as hours or sometimes even days. The model results are compared against the simulation results and shown to provide a good match, so the models can be used for fast radio dimensioning. Overall, the concepts, investigations, and analytical models presented in this thesis can help mobile network operators optimize their radio networks and provide the means to support service QoS differentiation and guarantees. In addition, the network virtualization concepts provide an excellent tool that enables operators to share resources and reduce costs, and gives smaller operators a better chance to enter the market.
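The dimensioning idea behind such CTMC models can be illustrated with a toy example. The sketch below is not one of the thesis's actual scheduler models; it is a minimal birth-death chain (an Erlang-loss-style cell: arrivals at rate lam, each active user departing at rate mu), whose closed-form steady state yields mean load and blocking probability in milliseconds rather than hours of simulation. All rates and the state-space size are invented for illustration.

```python
# Toy birth-death CTMC for radio dimensioning: the state is the number of
# active users in a cell, arrivals occur at rate lam, and each user departs
# independently at rate mu. The product-form steady-state solution gives
# pi_n = pi_0 * prod_{k=1..n} lam / (k * mu), normalized to sum to 1.

def birth_death_steady_state(lam, mu, max_users):
    weights = [1.0]
    for n in range(1, max_users + 1):
        weights.append(weights[-1] * lam / (n * mu))
    total = sum(weights)
    return [w / total for w in weights]

pi = birth_death_steady_state(lam=4.0, mu=1.0, max_users=10)
blocking = pi[-1]                                 # probability the cell is full
mean_users = sum(n * p for n, p in enumerate(pi)) # expected number of active users
```

Steady-state quantities like `blocking` and `mean_users` are exactly the kind of output that lets an analytical model replace a long simulation run for fast dimensioning.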

    Google Scholar is manipulatable

    Citations are widely considered in scientists' evaluation. As such, scientists may be incentivized to inflate their citation counts. While previous literature has examined self-citations and citation cartels, it remains unclear whether scientists can simply purchase citations. Here, we compile a dataset of ~1.6 million Google Scholar profiles to examine instances of citation fraud on the platform. We survey faculty at highly ranked universities and confirm that Google Scholar is widely used when evaluating scientists. Intrigued by a citation-boosting service we uncovered during our investigation, we contacted the service undercover as a fictional author and managed to purchase 50 citations. These findings provide conclusive evidence that citations can be bought in bulk, and highlight the need to look beyond citation counts.

    Exploring the Potential of Generative AI for the World Wide Web

    Generative Artificial Intelligence (AI) is a cutting-edge technology capable of producing text, images, and various media content from generative models and user prompts. Between 2022 and 2023, generative AI surged in popularity, with a plethora of applications spanning from AI-powered movies to chatbots. In this paper, we delve into the potential of generative AI within the realm of the World Wide Web, specifically focusing on image generation. Web developers already harness generative AI to help craft text and images, while Web browsers might use it in the future to locally generate images for tasks like repairing broken webpages, conserving bandwidth, and enhancing privacy. To explore this research area, we have developed WebDiffusion, a tool that makes it possible to simulate a Web powered by Stable Diffusion, a popular text-to-image model, from both a client and a server perspective. WebDiffusion further supports crowdsourcing of user opinions, which we use to evaluate the quality and accuracy of 409 AI-generated images sourced from 60 webpages. Our findings suggest that generative AI is already capable of producing pertinent and high-quality Web images without requiring Web designers to manually input prompts, just by leveraging contextual information available within the webpages. However, we acknowledge that direct in-browser image generation remains a challenge, as only highly powerful GPUs, such as the A40 and A100, can (partially) compete with classic image downloads. Nevertheless, this approach could be valuable for a subset of the images, for example when fixing broken webpages or handling highly private content.
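The "contextual information" idea, deriving an image prompt from the page itself instead of from a designer, can be sketched with the standard library. This is not WebDiffusion's actual prompt-construction logic (the paper does not specify it here); it is an illustrative sketch that harvests the page title, headings, and image alt text as prompt material.

```python
# Sketch: build a text-to-image prompt from webpage context rather than a
# hand-written one, by collecting <title>, <h1>/<h2> text, and <img> alt text.
from html.parser import HTMLParser

class ContextCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.parts = []
        self._capture = None

    def handle_starttag(self, tag, attrs):
        if tag in ("title", "h1", "h2"):
            self._capture = tag
        elif tag == "img":
            alt = dict(attrs).get("alt")
            if alt:
                self.parts.append(alt)

    def handle_endtag(self, tag):
        if tag == self._capture:
            self._capture = None

    def handle_data(self, data):
        if self._capture and data.strip():
            self.parts.append(data.strip())

def prompt_from_html(html):
    collector = ContextCollector()
    collector.feed(html)
    return ", ".join(collector.parts)

page = ('<html><title>Alpine hiking guide</title><body><h1>Best trails</h1>'
        '<img src="x.jpg" alt="snowy mountain pass"></body></html>')
print(prompt_from_html(page))  # prints "Alpine hiking guide, Best trails, snowy mountain pass"
```

The resulting string would then be fed to a text-to-image model such as Stable Diffusion in place of a manual prompt.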

    I Tag, You Tag, Everybody Tags!

    Location tags are designed to track personal belongings. Nevertheless, there has been anecdotal evidence that location tags are also misused to stalk people. Tracking is achieved locally, e.g., via Bluetooth with a paired phone, and remotely, by piggybacking on location-reporting devices that come into proximity of a tag. This paper studies the performance of the two most popular location tags (Apple's AirTag and Samsung's SmartTag) through controlled experiments - with a known, large distribution of location-reporting devices - as well as in-the-wild experiments - with no control over the number and kind of reporting devices encountered, thus emulating real-life use cases. We find that both tags achieve similar performance, e.g., they are located 55% of the time in about 10 minutes within a 100 m radius. It follows that real-time stalking to a precise location via location tags is impractical, even when both tags are deployed concurrently, which achieves comparable accuracy in half the time. Nevertheless, half of a victim's exact movements can be backtracked accurately (10 m error) with just a one-hour delay, which is still perilous information in the possession of a stalker.

    HowkGPT: Investigating the Detection of ChatGPT-generated University Student Homework through Context-Aware Perplexity Analysis

    As the use of Large Language Models (LLMs) in text generation tasks proliferates, concerns arise over their potential to compromise academic integrity. The education sector currently struggles to distinguish student-authored homework assignments from AI-generated ones. This paper addresses the challenge by introducing HowkGPT, designed to identify homework assignments generated by AI. HowkGPT is built upon a dataset of academic assignments and accompanying metadata [17] and employs a pretrained LLM to compute perplexity scores for student-authored and ChatGPT-generated responses. These scores then assist in establishing a threshold for discerning the origin of a submitted assignment. Given the specificity and contextual nature of academic work, HowkGPT further refines its analysis by defining category-specific thresholds derived from the metadata, enhancing the precision of the detection. This study emphasizes the critical need for effective strategies to uphold academic integrity amidst the growing influence of LLMs and provides an approach to ensuring fair and accurate grading in educational institutions.
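The score-and-threshold step can be illustrated without a full LLM. The toy below replaces the pretrained LLM with a smoothed unigram model: it scores two texts by perplexity and compares them against a cut-off. The reference corpus, the test texts, and the threshold value are all invented; HowkGPT derives its (category-specific) thresholds from real student and ChatGPT responses, and the decision direction depends on which model does the scoring.

```python
# Perplexity scoring with a toy add-one-smoothed unigram model, standing in
# for the pretrained LLM that HowkGPT uses. Text the scoring model finds
# predictable gets low perplexity; unfamiliar text gets high perplexity.
import math
from collections import Counter

def unigram_model(corpus_tokens):
    counts = Counter(corpus_tokens)
    total, vocab = sum(counts.values()), len(counts)
    return lambda w: (counts[w] + 1) / (total + vocab + 1)

def perplexity(model, tokens):
    log_prob = sum(math.log(model(w)) for w in tokens)
    return math.exp(-log_prob / len(tokens))

reference = "the model predicts the next word given the context".split()
model = unigram_model(reference)

in_domain = "the model predicts the context".split()
off_domain = "quantum turtles juggle flaming violins".split()

ppl_in = perplexity(model, in_domain)    # low: every word seen in training
ppl_off = perplexity(model, off_domain)  # high: every word unseen

threshold = 15.0  # made-up cut-off; HowkGPT learns per-category thresholds
```

A submission would then be classified by which side of `threshold` its perplexity falls on, which is exactly the thresholding step the abstract describes.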

    ANALYSIS OF INCOME SMOOTHING PRACTICES IN PURSUING COMPANY PROFIT FROM THE PERSPECTIVE OF ISLAMIC ECONOMIC ETHICS

    The ethical perspective is an important matter in managing a business, as it serves to align business interests with moral principles. The objective of this article is to understand the perspective of Islamic economic ethics on the income smoothing practices that companies use to increase profit. This study employs a descriptive-comparative method with a causal-comparative approach as its analysis tool. Data were gathered through a literature search of related sources. The findings show that income smoothing practices are categorized as dysfunctional behavior within the ethical system of Islamic economics. These practices violate the basic objectives of Shariah (maqashid syariah) because they involve deception (tadlis) and uncertainty (gharar): information regarding the transactions conducted in these practices is not revealed to all involved parties.

    Learning congestion over millimeter-wave channels

    This paper studies how learning techniques can be used by the congestion control algorithms employed by transport protocols over 5G wireless channels, in particular millimeter waves. We show how metrics measured at the transport layer can be valuable for ascertaining the congestion level. In situations characterized by a high correlation between such parameters and the actual congestion, we observe that the performance of unsupervised learning methods is comparable to supervised learning approaches. Exploiting the ns-3 platform to perform an in-depth, realistic assessment allows us to study the impact of various layers of the protocol stack. We also consider different scheduling policies to determine whether the allocation of radio resources impacts the performance of the proposed scheme. This work has been funded by the Spanish Government (Ministerio de Economía y Competitividad, Fondo Europeo de Desarrollo Regional, MINECO-FEDER) by means of the project FIERCE: Future Internet Enabled Resilient smart CitiEs (RTI2018-093475-A-I00).
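The unsupervised side of this idea can be sketched compactly: cluster transport-layer delay samples into two groups without any labels, then treat the high-delay cluster as "congested". The RTT values and the tiny 1-D 2-means below are invented for illustration; the paper measures its metrics in ns-3 over mmWave channels.

```python
# Sketch of the unsupervised approach: a tiny 1-D k-means (k=2) separates
# transport-layer delay samples into an "idle" and a "congested" cluster,
# with no ground-truth labels needed.

def two_means(samples, iters=50):
    lo, hi = min(samples), max(samples)  # initialize centroids at the extremes
    for _ in range(iters):
        a = [x for x in samples if abs(x - lo) <= abs(x - hi)]
        b = [x for x in samples if abs(x - lo) > abs(x - hi)]
        lo, hi = sum(a) / len(a), sum(b) / len(b)
    return lo, hi

rtts_ms = [12, 14, 13, 15, 11, 95, 110, 102, 13, 98]  # invented, bimodal sample
idle_center, congested_center = two_means(rtts_ms)

def looks_congested(rtt):
    # Assign a new sample to the nearest cluster center.
    return abs(rtt - congested_center) < abs(rtt - idle_center)
```

A congestion-control algorithm could consult `looks_congested` on fresh delay samples, which is the kind of label-free signal the paper evaluates against supervised baselines.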

    Can we exploit machine learning to predict congestion over mmWave 5G channels?

    It is well known that transport protocol performance is severely hindered by wireless channel impairments. We study the applicability of Machine Learning (ML) techniques to predict the congestion status of 5G access networks, in particular mmWave links. We use realistic traces generated with the 3GPP channel models, without being affected by legacy congestion-control solutions. We start by identifying the metrics that might be exploited from the transport layer to learn the congestion state: delay and inter-arrival time. We formally study their correlation with the perceived congestion, which we ascertain based on buffer length variation. Then, we conduct an extensive analysis of various unsupervised and supervised solutions, which are used as a benchmark. The results show that unsupervised ML solutions can detect a large percentage of congestion situations, and they could thus bring interesting possibilities when designing congestion-control solutions for next-generation transport protocols. This work was supported by the Spanish Government (MINECO) by means of the project FIERCE "Future Internet Enabled Resilient smart CitiEs" under Grant Agreement No. RTI2018-093475-A-I00.
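The supervised benchmark can be sketched as the simplest possible learner: given congestion labels derived from buffer occupancy, fit a one-feature decision stump, i.e., the single delay threshold that best separates the two classes. The delay samples and labels below are invented; the paper's features come from real 3GPP-model traces.

```python
# Supervised benchmark sketch: learn the delay threshold that maximizes
# training accuracy, given labels derived from buffer length variation
# (1 = buffer building up, i.e., congested). A one-feature decision stump.

def best_threshold(delays, labels):
    pairs = sorted(zip(delays, labels))
    candidates = [(pairs[i][0] + pairs[i + 1][0]) / 2 for i in range(len(pairs) - 1)]
    best, best_correct = None, -1
    for t in candidates:
        correct = sum((d > t) == bool(y) for d, y in pairs)
        if correct > best_correct:
            best, best_correct = t, correct
    return best, best_correct / len(pairs)

delays = [10, 12, 11, 14, 13, 80, 90, 85, 95, 88]  # ms, invented
labels = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]            # from buffer occupancy
threshold, accuracy = best_threshold(delays, labels)
```

Comparing such a labeled baseline against the unsupervised clusters is the benchmark role the abstract describes for the supervised solutions.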

    Car detection using cascade classifier on embedded platform

    Advanced Driver-Assistance Systems (ADAS) help reduce traffic accidents caused by distracted driving. One of the features of ADAS is the Forward Collision Warning System (FCWS), in which car detection is a crucial step. This paper describes a car detection system using a cascade classifier running on an embedded platform. The embedded platform used is the NXP SBC-S32V234 evaluation board with a 64-bit quad-core ARM Cortex-A53. The system is developed in C++ using the open-source computer vision library OpenCV. For the car detection process, object detection with the cascade classifier method is used. We trained the cascade detector using positive and negative instances mostly from our self-collected Malaysian road dataset. The tested car detection system gives about 88.3 percent detection accuracy on images of 340 by 135 resolution (after cropping and resizing). When running on the embedded platform, it achieved an average of 13 frames per second with video file input and an average of 15 frames per second with camera input.
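What makes a cascade classifier fast enough for an embedded board is early rejection: a window is dismissed as soon as any cheap stage fails, so most of the image never reaches the expensive stages. The real system uses OpenCV's trained `cv2.CascadeClassifier` (via `detectMultiScale`); the pure-Python toy below, with an invented 1-D "image" and invented stage tests, only illustrates that evaluation pattern.

```python
# Toy illustration of cascade classification over sliding windows: each
# stage is a cheap test, and a window is rejected on the first failure,
# which is the early-exit behavior that makes real cascades fast.

def cascade_pass(window, stages):
    return all(stage(window) for stage in stages)  # bails out on first failure

def sliding_windows(signal, width):
    for start in range(len(signal) - width + 1):
        yield start, signal[start:start + width]

# Invented stages on a 1-D "image":
# stage 1 (cheap): enough total intensity in the window;
# stage 2 (costlier): a dark band in the middle, vaguely like a car's shadow.
stages = [
    lambda w: sum(w) > 10,
    lambda w: w[len(w) // 2] < min(w[0], w[-1]),
]

signal = [0, 1, 9, 2, 8, 0, 0, 1]  # pretend pixel intensities
detections = [s for s, w in sliding_windows(signal, 3) if cascade_pass(w, stages)]
```

In the real detector the stages are boosted Haar-feature tests learned from the positive and negative training instances, and the window additionally slides across multiple scales.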