457 research outputs found

    Resource Management for Edge Computing in Internet of Things (IoT)

    The large number of devices in the Internet of Things (IoT) and their continuous data collection lead to rapid growth of the amount of gathered data. Processing all of this data on central cloud servers is inefficient and in some cases even impossible or unnecessary. Data processing is therefore moved to the edge of the network, which has led to the concepts of edge computing. Processing information close to the data source (e.g., on gateways and edge devices) not only reduces the heavy workload of central servers and networks, but also lowers the latency of real-time applications, because the potentially unreliable communication to cloud servers with its unpredictable network latency is avoided. Current IoT architectures use gateways to establish application-specific connections to IoT devices. In typical configurations, several IoT edge devices share one IoT gateway. Because of a gateway's limited bandwidth and computing capacity, the quality of service (QoS) of the connected IoT edge devices must be adapted over time, not only to satisfy the requirements of the individual users of the IoT devices, but also to accommodate the QoS needs of the other IoT edge devices attached to the same gateway. This work first examines essential technologies for IoT and existing trends. It presents characteristic properties of IoT for the embedded domain as well as a comprehensive IoT perspective for embedded systems. Several healthcare applications are studied and implemented in order to derive a model of their data-processing software. This application model helps to identify different operating modes. IoT systems expect edge devices to support several operating modes so that they can adapt to changing scenarios at run time, for example an energy-saving mode when battery reserves are low while critical functionality is maintained, or a mode that increases the quality of service at the user's request. These modes use either different offloading schemes (e.g., transmitting raw data, partially processed data, or only the final result) or different qualities of service. Operating modes differ in their resource requirements both on the device (e.g., energy consumption) and on the gateway (e.g., communication bandwidth, computing power, memory, etc.). Selecting the best operating mode for edge devices is challenging given the limited resources at the edge of the network (e.g., bandwidth and computing power of the shared gateway), the diverse constraints of the IoT edge devices (e.g., battery lifetime, quality of service, etc.) and the run-time variability at the edge of the IoT infrastructure. This work develops and presents fast and efficient selection techniques for operating modes. When IoT devices are within range of several gateways, managing the shared resources and selecting the operating modes of the IoT devices becomes even more complex. This work presents a distributed, trading-based device-management mechanism for IoT systems with multiple gateways. This mechanism targets the combined problem of binding (i.e., determining a gateway for each IoT device) and allocation (i.e., determining the resources assigned to each device).
Starting from an initial configuration, the gateways negotiate and communicate with each other and migrate IoT devices between gateways whenever this increases the benefit for the overall system. This work also presents application-specific optimizations for IoT devices. Three healthcare applications were realized and studied for wearable IoT devices. A novel compression method is also presented that is particularly suited to IoT applications processing bio-signals for health monitoring. This technique reduces the amount of data the IoT device has to transmit, which in turn reduces the resource utilization on the device and on the shared gateway. To evaluate the proposed techniques and mechanisms, several applications were profiled on IoT platforms to determine their parameters, such as execution time and resource usage. These parameters were then used in a framework that models the IoT network, captures the interaction between devices and gateway, and measures the communication overhead as well as the achieved battery lifetime and quality of service of the devices. The operating-mode selection algorithms were additionally implemented on IoT platforms to measure their overheads in terms of execution time and memory consumption.
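
The operating-mode selection problem described above can be made concrete with a small sketch: each device offers a few offloading modes with different gateway bandwidth demands and utilities, and the gateway picks one mode per device within its bandwidth budget. The mode sets, the numbers, and the brute-force search below are illustrative assumptions only, not the selection technique developed in the thesis.

```python
# Minimal sketch (not the thesis' algorithm): choose one operating mode per
# IoT edge device so that total utility is maximised while the shared
# gateway's bandwidth budget is respected. All mode tuples are made up.
from itertools import product

# (mode name, bandwidth demand in kbit/s on the gateway, utility score)
MODES = {
    "wearable_ecg": [("raw", 200, 1.0), ("features", 40, 0.8), ("result", 5, 0.5)],
    "fall_sensor":  [("raw", 120, 1.0), ("features", 25, 0.7), ("result", 3, 0.4)],
    "spo2_patch":   [("raw", 80, 1.0),  ("result", 2, 0.6)],
}

def select_modes(modes, bandwidth_budget):
    """Exhaustively pick one mode per device; fine for a handful of devices."""
    devices = list(modes)
    best_choice, best_utility = None, -1.0
    for combo in product(*(modes[d] for d in devices)):
        bandwidth = sum(m[1] for m in combo)
        utility = sum(m[2] for m in combo)
        if bandwidth <= bandwidth_budget and utility > best_utility:
            best_choice, best_utility = dict(zip(devices, combo)), utility
    return best_choice, best_utility

choice, utility = select_modes(MODES, bandwidth_budget=150)
for device, (mode, bw, u) in choice.items():
    print(f"{device}: {mode} ({bw} kbit/s, utility {u})")
print("total utility:", utility)
```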

    Joint Communication and Computation Framework for Goal-Oriented Semantic Communication with Distortion Rate Resilience

    Recent research efforts on semantic communication have mostly considered accuracy as the main problem for optimizing goal-oriented communication systems. However, these approaches introduce a paradox: the accuracy of artificial intelligence (AI) tasks should naturally emerge through training rather than being dictated by network constraints. Acknowledging this dilemma, this work introduces an innovative approach that leverages rate-distortion theory to analyze distortions induced by communication and semantic compression, and thereby the learning process. Specifically, we examine the distribution shift between the original data and the distorted data, thus assessing its impact on the AI model's performance. Building upon this analysis, we can preemptively estimate the empirical accuracy of AI tasks, making the goal-oriented semantic communication problem feasible. To achieve this objective, we present the theoretical foundation of our approach, accompanied by simulations and experiments that demonstrate its effectiveness. The experimental results indicate that our proposed method enables accurate AI task performance while adhering to network constraints, establishing it as a valuable contribution to the field of signal processing. Furthermore, this work advances research in goal-oriented semantic communication and highlights the significance of data-driven approaches in optimizing the performance of intelligent systems. Comment: 15 pages, 11 figures, 2 tables.
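
As a rough illustration of the idea that compression distortion shifts the input distribution and thereby degrades task accuracy, the sketch below fits a nearest-centroid classifier on clean features and evaluates it on features perturbed by Gaussian noise of variance D, relating each D to the per-dimension rate of the Gaussian rate-distortion bound R(D) = 0.5*log2(sigma^2/D). This is a generic toy experiment, not the paper's estimator; all numbers are made up.

```python
# Toy experiment: accuracy of a nearest-centroid classifier versus the
# distortion D injected into its inputs (emulating lossy semantic compression).
import numpy as np

rng = np.random.default_rng(0)
dim, n, sigma2 = 8, 2000, 1.0

# Two classes with unit-variance features around +/-0.7 per dimension.
centers = np.stack([+0.7 * np.ones(dim), -0.7 * np.ones(dim)])
labels = rng.integers(0, 2, n)
x = centers[labels] + rng.normal(0.0, np.sqrt(sigma2), (n, dim))

# "Training": class centroids estimated from the clean data.
centroids = np.stack([x[labels == c].mean(axis=0) for c in (0, 1)])

def accuracy(samples):
    dists = ((samples[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return (dists.argmin(axis=1) == labels).mean()

for D in (0.05, 0.2, 0.5, 1.0):
    distorted = x + rng.normal(0.0, np.sqrt(D), x.shape)  # compression/channel distortion
    rate = 0.5 * np.log2(sigma2 / D)                      # Gaussian rate-distortion bound, bit/dim
    print(f"D={D:4.2f}  rate~{rate:4.2f} bit/dim  accuracy={accuracy(distorted):.3f}")
```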

    Efficient Encoding of Wireless Capsule Endoscopy Images Using Direct Compression of Colour Filter Array Images

    Since its invention in 2001, wireless capsule endoscopy (WCE) has played an important role in the endoscopic examination of the gastrointestinal tract. During this period, WCE has undergone tremendous advances in technology, making it the first-line modality for diseases ranging from bleeding to cancer in the small bowel. Current research efforts are focused on evolving WCE to include functionality such as drug delivery, biopsy, and active locomotion. For the integration of these functionalities into WCE, two critical prerequisites are image quality enhancement and power consumption reduction. An efficient image compression solution is required to retain the highest image quality while reducing the transmission power. The issue is more challenging because image sensors in WCE capture images in Bayer colour filter array (CFA) format, so standard compression engines provide inferior compression performance. The focus of this thesis is to design an optimized image compression pipeline to encode capsule endoscopic (CE) images efficiently in CFA format. To this end, this thesis proposes two image compression schemes. First, a lossless image compression algorithm is proposed, consisting of an optimum reversible colour transformation, a low-complexity prediction model, a corner-clipping mechanism and a single-context adaptive Golomb-Rice entropy encoder. The derivation of the colour transformation that provides the best performance for a given prediction model is treated as an optimization problem. The low-complexity prediction model works in raster order and requires no buffer memory. The colour transformation yields lower inter-colour correlation and allows efficient independent encoding of the colour components. The second compression scheme in this thesis is a lossy compression algorithm with an integer discrete cosine transform at its core. Using statistics obtained from a large dataset of CE images, an optimum colour transformation is derived using principal component analysis (PCA). The transformed coefficients are quantized using an optimized quantization table designed to discard medically irrelevant information. A fast demosaicking algorithm is developed to reconstruct the colour image from the lossy CFA image in the decoder. Extensive experiments and comparisons with state-of-the-art lossless image compression methods establish the superiority of the proposed methods as simple and efficient image compression algorithms. The lossless algorithm can transmit the image losslessly within the available bandwidth. Performance evaluation of the lossy compression algorithm indicates that it can deliver high-quality images at low transmission power and low computation cost.
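
To make the entropy-coding stage tangible, the sketch below shows plain Golomb-Rice coding of signed prediction residuals with a fixed parameter k (zigzag mapping, unary quotient, k-bit remainder). The thesis' encoder is context adaptive and is paired with its own colour transform and predictor, none of which are reproduced here; this is only the textbook core of the technique.

```python
# Minimal Golomb-Rice codec for signed prediction residuals (fixed k).
def zigzag(v):                      # map signed residual to non-negative integer
    return (v << 1) if v >= 0 else (-v << 1) - 1

def unzigzag(u):
    return (u >> 1) if (u & 1) == 0 else -((u + 1) >> 1)

def rice_encode(residuals, k):
    bits = []
    for v in residuals:
        u = zigzag(v)
        q, r = u >> k, u & ((1 << k) - 1)
        bits += [1] * q + [0]                               # unary quotient, 0-terminated
        bits += [(r >> i) & 1 for i in reversed(range(k))]  # k-bit remainder, MSB first
    return bits

def rice_decode(bits, count, k):
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i] == 1:
            q, i = q + 1, i + 1
        i += 1                                              # skip the terminating 0
        r = 0
        for _ in range(k):
            r, i = (r << 1) | bits[i], i + 1
        out.append(unzigzag((q << k) | r))
    return out

residuals = [0, -1, 3, 2, -6, 1, 0, 0, 12]
encoded = rice_encode(residuals, k=2)
assert rice_decode(encoded, len(residuals), k=2) == residuals
print(len(encoded), "bits for", len(residuals), "residuals")
```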

    ResFed: Communication Efficient Federated Learning by Transmitting Deep Compressed Residuals

    Federated learning enables cooperative training among massively distributed clients by sharing their learned local model parameters. However, with increasing model size, deploying federated learning requires a large communication bandwidth, which limits its deployment in wireless networks. To address this bottleneck, we introduce a residual-based federated learning framework (ResFed), where residuals rather than model parameters are transmitted over the communication network for training. In particular, we integrate two pairs of shared predictors for model prediction in both server-to-client and client-to-server communication. By employing a common prediction rule, both locally and globally updated models are always fully recoverable in the clients and the server. We highlight that the residuals only encode the quasi-update of a model within a single round and therefore carry denser information and have lower entropy than the model weights and gradients. Based on this property, we further apply lossy compression to the residuals by sparsification and quantization and encode them for efficient communication. The experimental evaluation shows that ResFed incurs remarkably lower communication costs and achieves better accuracy than standard federated learning by leveraging the less sensitive residuals. For instance, to train a 4.08 MB CNN model on CIFAR-10 with 10 clients under a non-independent and identically distributed (non-IID) setting, our approach achieves a compression ratio of over 700x in each communication round with minimal impact on accuracy. To reach an accuracy of 70%, it saves around 99% of the total communication volume, from 587.61 Mb to 6.79 Mb in upstreaming and to 4.61 Mb in downstreaming on average across all clients.
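
The following sketch illustrates the general residual-compression idea in isolation: the client transmits only the top-k, 8-bit-quantised entries of the difference between its updated weights and a shared weight prediction, from which the receiver can restore an approximate model. The keep ratio, tensor sizes, and codec details are assumptions for illustration and do not reproduce ResFed's exact scheme.

```python
# Sketch of residual sparsification + 8-bit quantisation for communication.
import numpy as np

rng = np.random.default_rng(1)
predicted = rng.normal(size=10_000).astype(np.float32)        # shared predictor output
local     = predicted + 0.01 * rng.normal(size=10_000).astype(np.float32)

def compress_residual(local_w, predicted_w, keep_ratio=0.01):
    residual = local_w - predicted_w
    k = max(1, int(keep_ratio * residual.size))
    idx = np.argpartition(np.abs(residual), -k)[-k:]           # top-k by magnitude
    values = residual[idx]
    scale = np.abs(values).max() / 127.0 or 1.0
    q = np.round(values / scale).astype(np.int8)               # 8-bit quantisation
    return idx.astype(np.uint32), q, np.float32(scale)

def decompress_residual(idx, q, scale, predicted_w):
    restored = predicted_w.copy()
    restored[idx] += q.astype(np.float32) * scale
    return restored

idx, q, scale = compress_residual(local, predicted)
restored = decompress_residual(idx, q, scale, predicted)
payload_bytes = idx.nbytes + q.nbytes + 4
print(f"payload {payload_bytes} B vs {local.nbytes} B raw,",
      f"max error {np.abs(restored - local).max():.4g}")
```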

    Federated Learning and Meta Learning: Approaches, Applications, and Directions

    Over the past few years, significant advancements have been made in the field of machine learning (ML) to address resource management, interference management, autonomy, and decision-making in wireless networks. Traditional ML approaches rely on centralized methods, where data is collected at a central server for training. However, this approach poses a challenge in terms of preserving the data privacy of devices. To address this issue, federated learning (FL) has emerged as an effective solution that allows edge devices to collaboratively train ML models without compromising data privacy. In FL, local datasets are not shared, and the focus is on learning a global model for a specific task involving all devices. However, FL has limitations when it comes to adapting the model to devices with different data distributions. In such cases, meta learning is considered, as it enables the adaptation of learning models to different data distributions using only a few data samples. In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta). Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks. We also analyze the relationships among these learning algorithms and examine their advantages and disadvantages in real-world applications.
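
For readers new to FL, a minimal FedAvg-style round is sketched below: clients run a few local gradient steps on private linear-regression data and the server averages the returned weights, weighted by dataset size. The data, model, and hyper-parameters are made up and are not taken from the tutorial.

```python
# Minimal FedAvg-style sketch on a toy linear-regression task.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])

def make_client(n):                               # private (x, y) data per client
    x = rng.normal(size=(n, 3))
    return x, x @ true_w + 0.1 * rng.normal(size=n)

clients = [make_client(n) for n in (50, 80, 120)]

def local_train(w, x, y, lr=0.05, steps=20):
    w = w.copy()
    for _ in range(steps):
        grad = 2.0 * x.T @ (x @ w - y) / len(y)   # MSE gradient
        w -= lr * grad
    return w

global_w = np.zeros(3)
for round_idx in range(10):
    local_ws = [local_train(global_w, x, y) for x, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    global_w = np.average(local_ws, axis=0, weights=sizes)   # FedAvg aggregation

print("learned:", np.round(global_w, 3), "target:", true_w)
```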

    TinyML: Tools, Applications, Challenges, and Future Research Directions

    In recent years, Artificial Intelligence (AI) and Machine Learning (ML) have gained significant interest from both industry and academia. Notably, conventional ML techniques require enormous amounts of power to meet the desired accuracy, which has limited their use mainly to high-capability devices such as network nodes. However, with many advancements in technologies such as the Internet of Things (IoT) and edge computing, it is desirable to incorporate ML techniques into resource-constrained embedded devices for distributed and ubiquitous intelligence. This has motivated the emergence of the TinyML paradigm, an embedded ML technique that enables ML applications on multiple cheap, resource- and power-constrained devices. However, during this transition towards appropriate implementation of TinyML technology, multiple challenges, such as processing capacity optimization, improved reliability, and maintenance of learning models' accuracy, require timely solutions. In this article, the various avenues available for TinyML implementation are reviewed. First, a background of TinyML is provided, followed by detailed discussions of the various tools supporting TinyML. Then, state-of-the-art applications of TinyML using advanced technologies are detailed. Lastly, various research challenges and future directions are identified. Comment: 12 pages, 3 tables, 4 figures.
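
One concrete TinyML building block is post-training quantisation, which shrinks model weights so they fit on resource-constrained devices. The sketch below applies a simple per-tensor affine 8-bit quantisation to a random weight matrix; it is a generic illustration and is not tied to any specific tool discussed in the survey.

```python
# Per-tensor affine 8-bit quantisation of a weight tensor: w ~ scale * (q - zero_point).
import numpy as np

def quantize_int8(w):
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / 255.0 or 1.0
    zero_point = int(round(-w_min / scale)) - 128
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

weights = np.random.default_rng(0).normal(0.0, 0.2, size=(64, 32)).astype(np.float32)
q, scale, zp = quantize_int8(weights)
error = np.abs(dequantize(q, scale, zp) - weights).max()
print(f"{weights.nbytes} B -> {q.nbytes} B, max abs error {error:.5f}")
```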

    Worker-robot cooperation and integration into the manufacturing workcell via the holonic control architecture

    Cooperative manufacturing is a new field of research that addresses new challenges beyond the physical safety of the worker. These challenges arise from the need to connect the worker and the cobot, from an informatics point of view, in one cooperative workcell. This requires developing an appropriate manufacturing control system that fits the nature of both the worker and the cobot. Furthermore, the manufacturing control system must be able to understand production variations, guide the cooperation between the worker and the cobot, and adapt to these variations.

    Towards Computational Efficiency of Next Generation Multimedia Systems

    To address the throughput demands of complex applications (such as multimedia), a next-generation system designer needs to co-design and co-optimize the hardware and software layers. Hardware/software knobs must be tuned in synergy to increase throughput efficiency. This thesis provides such algorithmic and architectural solutions while considering new technology challenges (the power cap and memory aging). The goal is to maximize throughput efficiency under timing and hardware constraints.