60 research outputs found

    Generative AI for Consumer Electronics: Enhancing User Experience with Cognitive and Semantic Computing

    Generative Artificial Intelligence (GAI) models such as ChatGPT, DALL-E, and the recently introduced Gemini have attracted considerable interest in both business and academia because of their capacity to produce material in response to human inputs. GAI sits within the broader field of cognitive computing, a branch of machine learning that emphasizes systems capable of creating content such as images, text, or sound, while semantic computing acts as a fundamental element of GAI, furnishing the comprehension of context and meaning that GAI systems need to generate content of human-like quality. GAI is becoming a game-changing technology for the consumer electronics industry, with a variety of applications that improve user experiences and product development. GAI can revolutionise architectural visualisation by facilitating quick prototyping and the investigation of cutting-edge design ideas. By creating unique compositions and graphics for a variety of applications, it also empowers media production and music composition. Our research identifies several applications of GAI in the consumer electronics industry. We analyze how GAI is utilized in augmented reality (AR) applications, optimizing user interactions and immersive experiences. Moreover, we explore the integration of GAI in voice assistants and virtual avatars, enhancing image generation and natural language understanding and delivering more personalized interactions. We present a novel case study on a Generative Artificial Intelligence-based framework for answering consumer electronics queries, which we developed and present using various GAI-based tools and integrations. The paper also discusses the challenges in implementing GAI in consumer electronics, such as ethical considerations, data privacy, compatibility with existing systems, and the need for continuous updates and improvements.
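The abstract describes the query-answering framework only at a high level. The sketch below shows one plausible retrieval-then-generate loop for such a system; the FAQ entries, the overlap-based retriever, and the template "generator" are all hypothetical stand-ins, not the tools or integrations used in the paper.

```python
# Illustrative retrieval-then-generate pipeline for consumer electronics
# queries. Every name here (FAQ, retrieve, answer) is a hypothetical
# stand-in for the paper's actual GAI tools and integrations.

FAQ = {
    "how do i reset my smart tv":
        "Unplug the TV for 60 seconds, then hold the power button "
        "for 10 seconds before plugging it back in.",
    "how do i pair bluetooth headphones":
        "Put the headphones in pairing mode, then select them in the "
        "device's Bluetooth settings.",
}

def retrieve(query: str, corpus: dict) -> tuple:
    """Return the FAQ entry whose question shares the most words with the query."""
    q_words = set(query.lower().split())
    best = max(corpus, key=lambda k: len(q_words & set(k.split())))
    return best, corpus[best]

def answer(query: str, corpus: dict = FAQ) -> str:
    matched_q, matched_a = retrieve(query, corpus)
    # In a real system a generative model would synthesize the reply
    # from the retrieved context; a fixed template stands in here.
    return f"Based on '{matched_q}': {matched_a}"
```

In the paper's framework, the template step would be replaced by a call to a generative model, with the retrieved context supplied as part of the prompt.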

    Generative AI for Finance: Applications, Case Studies and Challenges

    Generative AI (GAI), which has become increasingly popular in recent years, can be considered a brilliant computational machine that can not only assist with simple searching and organising tasks but also propose new ideas, make decisions on its own, and derive better conclusions from complex inputs. Finance comprises various difficult and time-consuming tasks that require significant human effort and are highly prone to error, such as creating and managing financial documents and reports. Hence, incorporating GAI to simplify these processes and make them hassle-free will be consequential. Integrating GAI with finance can open new doors of possibility. With its capacity to enhance decision-making and provide more effective personalised insights, it has the power to optimise financial procedures. In this paper, we address the research gap left by the lack of a detailed study exploring the possibilities and advancements of integrating GAI with finance. We discuss applications that include providing financial consultations to customers, making predictions about the stock market, identifying and addressing fraudulent activities, evaluating risks, and organising unstructured data. We explore real-world examples of GAI, including the Finance generative pre-trained transformer (GPT), BloombergGPT, and so forth. We look closer at how finance professionals work with AI-integrated systems and tools and how this affects the overall process. We address the challenges presented by comprehensibility, bias, resource demands, and security issues, while at the same time emphasising solutions such as GPTs specialised in financial contexts. To the best of our knowledge, this is the first comprehensive paper dealing with GAI for finance.

    A novel end-to-end deep convolutional neural network based skin lesion classification framework

    Background: Skin diseases are reported to contribute 1.79% of the global burden of disease. The accurate diagnosis of specific skin diseases is known to be a challenging task due, in part, to variations in skin tone, texture, body hair, etc. Classification of skin lesions using machine learning is a demanding task due to the varying shapes, sizes, colors, and vague boundaries of some lesions. The use of deep learning for the classification of skin lesion images has been shown to help diagnose the disease at its early stages. Recent studies have demonstrated that these models perform well in skin detection tasks, with high accuracy and efficiency. Objective: Our paper proposes an end-to-end framework for skin lesion classification, and our contributions are two-fold. Firstly, two fundamentally different algorithms are proposed for segmenting and extracting features from images during image preprocessing. Secondly, we present a deep convolutional neural network model, S-MobileNet, that aims to classify 7 different types of skin lesions. Methods: We used the HAM10000 dataset, which consists of 10,000 dermatoscopic images from different populations and is publicly available through the International Skin Imaging Collaboration (ISIC) Archive. The image data was preprocessed to make it suitable for modeling. Exploratory data analysis (EDA) was performed to understand various attributes and their relationships within the dataset. A modified version of a Gaussian filtering algorithm and segmentation-based fractal texture analysis (SFTA) were applied for image segmentation and feature extraction. The processed dataset was then fed into the S-MobileNet model. This model was designed to be lightweight and was analysed along three dimensions: using the ReLU activation function, using the Mish activation function, and applying compression at intermediary layers. In addition, an alternative approach for compressing layers in the S-MobileNet architecture was applied to ensure a lightweight model that does not compromise on performance. Results: The model was trained in several experiments and assessed using various performance measures, including loss, accuracy, precision, and the F1-score. Our results demonstrate an improvement in model performance when applying the preprocessing technique. The Mish activation function was shown to outperform ReLU. Further, the classification accuracy of the compressed S-MobileNet was shown to outperform that of S-MobileNet. Conclusions: To conclude, our findings show that our proposed deep learning-based S-MobileNet model is the optimal approach for classifying skin lesion images in the HAM10000 dataset. In the future, our approach could be adapted and applied to other datasets, and validated to develop a skin lesion framework that can be utilised in real time.
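The Mish activation compared against ReLU in the Methods is simple to state: Mish(x) = x · tanh(softplus(x)). A minimal NumPy sketch, for illustration only (the paper's S-MobileNet implementation is not shown in the abstract):

```python
import numpy as np

def softplus(x):
    # Numerically stable softplus: log(1 + exp(x))
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)

def mish(x):
    # Mish(x) = x * tanh(softplus(x)): smooth and non-monotonic,
    # unlike ReLU's hard cutoff at zero.
    return x * np.tanh(softplus(x))

def relu(x):
    return np.maximum(x, 0.0)

x = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
# Unlike ReLU, Mish passes a small negative signal through,
# which is often credited with smoother gradient flow.
print(mish(x))
print(relu(x))
```

For large positive inputs Mish approaches the identity, so it behaves like ReLU there while remaining differentiable everywhere.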

    AI-enabled remote monitoring of vital signs for COVID-19: methods, prospects and challenges

    The COVID-19 pandemic has overwhelmed the existing healthcare infrastructure in many parts of the world. Healthcare professionals are not only over-burdened but also at high risk of nosocomial transmission from COVID-19 patients. Screening and monitoring the health of a large number of susceptible or infected individuals is a challenging task. Although professional medical attention and hospitalization are necessary for high-risk COVID-19 patients, home isolation is an effective strategy for low- and medium-risk patients, as well as for those who are at risk of infection and have been quarantined. However, this necessitates effective techniques for remotely monitoring the patients’ symptoms. Recent advances in Machine Learning (ML) and Deep Learning (DL) have strengthened the power of imaging techniques and can be used to remotely perform several tasks that previously required the physical presence of a medical professional. In this work, we study the prospects of vital signs monitoring for COVID-19-infected as well as quarantined individuals by using DL and image/signal-processing techniques, many of which can be deployed using the simple cameras and sensors available on a smartphone or a personal computer, without the need for specialized equipment. We demonstrate the potential of ML-enabled workflows for several vital signs, such as heart and respiratory rates, cough, blood pressure, and oxygen saturation. We also discuss the challenges involved in implementing ML-enabled techniques.
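As one concrete illustration of the signal-processing side of such workflows, heart rate can be recovered from a periodic intensity signal (e.g., a camera-based photoplethysmogram) by locating the dominant frequency within a physiologically plausible band. The sketch below uses a synthetic signal and assumed parameters (a 30 Hz camera frame rate and a 0.7-3.0 Hz search band); it illustrates the general idea rather than the paper's specific methods.

```python
import numpy as np

def estimate_bpm(signal, fs, lo_hz=0.7, hi_hz=3.0):
    """Estimate heart rate (beats/min) as the dominant FFT peak
    inside a plausible band (0.7-3.0 Hz = 42-180 bpm by default)."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    band = (freqs >= lo_hz) & (freqs <= hi_hz)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic 20 s "pulse" at 75 bpm (1.25 Hz), sampled at an assumed
# 30 Hz camera frame rate with additive Gaussian noise.
fs = 30.0
t = np.arange(0.0, 20.0, 1.0 / fs)
rng = np.random.default_rng(0)
pulse = np.sin(2 * np.pi * 1.25 * t) + 0.3 * rng.normal(size=t.size)

print(estimate_bpm(pulse, fs))  # dominant peak near 75 bpm
```

Restricting the peak search to the heart-rate band is what makes the estimate robust to slow illumination drift and high-frequency sensor noise; a real pipeline would add detrending and motion compensation before this step.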