RFID Based Smart Shopping Trolley with PCB Design
Shopping is captivating and addictive, but standing in never-ending billing queues is tiring. Even as various online shopping platforms grow, retail shopping has not declined. To ease the shopping process and eliminate lengthy billing queues, a new product has to be introduced with the appropriate use of technology. This paper summarizes a Radio Frequency Identification (RFID)-based smart shopping trolley that provides automatic billing and improves customer satisfaction. The RFID tags on the products are read by a reader mounted on the shopping cart. A display screen on the cart shows the cost of each item and the final billing amount, which is then sent to the cashier's database over a Wi-Fi module, saving the customer's time. The entire model is designed on a PCB and simulated.
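The billing flow the abstract describes can be sketched as follows. This is an illustrative model only: the catalogue, tag IDs, and class/method names are hypothetical, not from the paper; a real cart would receive tag reads from an RFID reader module and push the final bill over Wi-Fi.

```python
# Minimal sketch of the cart's automatic-billing flow (hypothetical names).
class SmartCart:
    def __init__(self, catalogue):
        self.catalogue = catalogue   # tag_id -> (product name, price)
        self.items = []

    def on_tag_read(self, tag_id):
        """Called whenever the reader scans a product's RFID tag."""
        name, price = self.catalogue[tag_id]
        self.items.append((name, price))
        return f"{name}: {price:.2f}"   # text shown on the cart's display

    def total(self):
        return sum(price for _, price in self.items)

    def checkout(self):
        """Final bill that would be sent to the cashier's database."""
        return {"items": list(self.items), "total": self.total()}

cart = SmartCart({"A1": ("Milk", 2.50), "B7": ("Bread", 1.80)})
cart.on_tag_read("A1")
cart.on_tag_read("B7")
bill = cart.checkout()
```

In a hardware realization, `on_tag_read` would be wired to the reader's interrupt and `checkout` to the Wi-Fi transmission, but the accounting logic is essentially this simple.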
Detection of AI-Generated Images
Generative AI has attracted enormous interest recently due to new applications like ChatGPT, DALL-E, Stable Diffusion, and Deepfake. In particular, DALL-E, Stable Diffusion, and others (Adobe Firefly, ImagineArt, etc.) can create images from a text prompt and are even able to produce photorealistic images. Consequently, intense research has been devoted to new image forensics applications able to distinguish real captured images and videos from artificial ones. Detecting forgeries made with Deepfake is one of the most researched issues. This paper addresses another kind of forgery detection: its purpose is to distinguish photorealistic AI-created images from real photos taken by a physical camera, that is, to make a binary decision over an image as to whether it was artificially or naturally created. The artificial images need not represent any real object, person, or place. For this purpose, techniques that perform pixel-level feature extraction are used. The first is Photo Response Non-Uniformity (PRNU), a characteristic noise caused by imperfections in the camera sensor that is traditionally used for source camera identification; the underlying idea is that AI images will exhibit a different PRNU pattern. The second is error level analysis (ELA), another type of feature extraction traditionally used to detect image editing, which photographers nowadays also use for manual detection of AI-created images. Both kinds of features are used to train convolutional neural networks to differentiate between AI images and real photographs. Good results are obtained, with accuracy rates of over 95%. Both extraction methods are carefully assessed by computing precision, recall, and F1-score measurements. The proliferation of AI-generated images, often referred to as deepfakes or synthetic media, has revolutionized how digital content is created, consumed, and shared.
While these advancements offer immense creative potential, they also pose significant challenges, particularly in terms of authenticity and misinformation. This paper explores the growing need for AI-generated image detection, especially in social media applications, and delves into the methods used to distinguish between human-made and AI-generated content. It outlines the technical challenges, the potential societal impact, and strategies for implementing robust detection mechanisms on social media platforms.
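The PRNU idea mentioned above rests on extracting a noise residual, the image minus a denoised version of itself, and correlating it against a reference pattern. The sketch below is purely illustrative and assumes grayscale images as lists of rows: real pipelines use wavelet denoising and NumPy arrays, while here a 3x3 mean filter stands in as the denoiser.

```python
# Hedged sketch of PRNU-style noise-residual extraction and correlation.
def mean_filter(img):
    """3x3 box blur with edge clamping; img is a list of pixel rows."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sum(vals) / 9.0
    return out

def noise_residual(img):
    """Residual = image minus its denoised version (here: box-blurred)."""
    blurred = mean_filter(img)
    return [[img[y][x] - blurred[y][x] for x in range(len(img[0]))]
            for y in range(len(img))]

def correlation(a, b):
    """Normalized cross-correlation between two flattened residuals."""
    fa = [v for row in a for v in row]
    fb = [v for row in b for v in row]
    ma, mb = sum(fa) / len(fa), sum(fb) / len(fb)
    num = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    den = (sum((x - ma) ** 2 for x in fa) *
           sum((y - mb) ** 2 for y in fb)) ** 0.5
    return num / den if den else 0.0

img = [[0, 10, 0], [10, 0, 10], [0, 10, 0]]
r = noise_residual(img)
```

In a detector along the paper's lines, residuals like `r` (or ELA maps) would be fed to a convolutional network rather than thresholded on a single correlation score.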
Automatic Parallel Pattern Detection in the Algorithm Structure Design Space
Parallel design patterns have been developed to help programmers efficiently design and implement parallel applications. However, identifying a suitable parallel pattern for a specific code region in a sequential application is a difficult task. Transforming an application according to the support structures applicable to these parallel patterns is also very challenging. In this paper, we present a novel approach to automatically find parallel patterns in the algorithm structure design space of sequential applications. In our approach, we classify the code blocks in a region according to the appropriate support structure of the detected pattern. This classification eases the transformation of a sequential application into its parallel version. We evaluated our approach on 17 applications from four different benchmark suites. Our method identified suitable algorithm structure patterns in the sequential applications. We confirmed our results by comparing them with the existing parallel versions of these applications. We also implemented the patterns we detected in cases in which parallel implementations were not available, achieving speedups of up to 14x.
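As a rough intuition for pattern classification, a detector can map dependence information for a loop to a pattern name from the algorithm structure design space. The function below is a deliberately simplified stand-in, not the paper's method: it only distinguishes loops with no loop-carried dependences (parallelizable as independent tasks) from loops whose iterations pass data forward (pipeline-like).

```python
# Toy pattern classifier over loop-carried dependence sets (illustrative only).
def classify_loop(carried):
    """carried: set of variables written in one iteration and read in a
    later one (loop-carried dependences). Empty set means iterations
    are independent and can run as parallel tasks."""
    return "Task Parallelism" if not carried else "Pipeline"
```

A real analysis, as in the paper, would also recognize reductions, geometric decomposition, and divide-and-conquer structures, and would derive the dependence sets from profiling or static analysis rather than receiving them as input.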
Unveiling parallelization opportunities in sequential programs
The stagnation of single-core performance leaves application developers with software parallelism as the only option to further benefit from Moore's Law. However, in view of the complexity of writing parallel programs, parallelizing myriads of sequential legacy programs presents a serious economic challenge. A key task in this process is identifying suitable parallelization targets in the source code. In this paper, we present an approach to automatically identify potential parallelism in sequential programs of realistic size. Compared to earlier approaches, our work combines a unique set of features that makes it superior in terms of functionality: it not only (i) detects available parallelism with high accuracy but also (ii) identifies the parts of the code that can run in parallel, even if they are spread widely across the code, and (iii) ranks parallelization opportunities according to the speedup expected for the entire program, while (iv) maintaining competitive overhead in terms of both time and memory.
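The whole-program ranking in (iii) can be thought of in Amdahl's-law terms: a candidate region matters according to the speedup it yields for the entire program, not in isolation. A sketch, with hypothetical candidate regions and numbers:

```python
# Ranking parallelization candidates by expected whole-program speedup.
def whole_program_speedup(fraction, region_speedup):
    """Amdahl's law: fraction is the share of total runtime spent in the
    candidate region; region_speedup is the speedup achieved inside it."""
    return 1.0 / ((1.0 - fraction) + fraction / region_speedup)

# Hypothetical candidates: name -> (runtime fraction, local speedup)
candidates = {"hot_loop": (0.60, 8.0), "init": (0.05, 16.0)}
ranked = sorted(candidates,
                key=lambda name: whole_program_speedup(*candidates[name]),
                reverse=True)
```

Note how a 16x local speedup on a 5% region loses to an 8x speedup on a 60% region: runtime share dominates, which is exactly why ranking by expected whole-program speedup is more useful than ranking by local gain.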
Parallelizing Audio Analysis Applications - A Case Study
As multicore computers become widespread, the need for software programmers to decide on the most effective parallelization techniques becomes very prominent. In this case study, we examined a competition in which four teams of graduate students parallelized two sequential audio analysis applications. The students were introduced to the PThreads, OpenMP, and TBB parallel programming models, and the use of different profiling and debugging tools was also taught during the course. Two of the teams parallelized the libVorbis audio encoder and the other two parallelized the LAME encoding engine. The strategies the four teams used to parallelize these applications drew on the taught programming models, targeting both fine-grained and coarse-grained parallelism. These strategies are discussed in detail along with the tools used for development and profiling. An analysis of the results discusses the speedups obtained and the audio quality of the encoded output. A list of lessons to remember while parallelizing an application is provided as well. These lessons include best pedagogical methods, the importance of understanding the program before choosing a programming model, concentrating on coarse-grained parallelism first, looking for dependency relaxation and parallelism beyond the predefined language constructs, the need for practice or prior experience in parallel programming, and the need for tools that assist parallelization.
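The "coarse-grained first" lesson can be illustrated by splitting the input into large chunks and handing each to a worker, much as the teams split audio frames across threads. The `encode_chunk` body below is a hypothetical stand-in for a real codec call (e.g. into LAME or libVorbis), not actual encoder code:

```python
# Coarse-grained parallelism: encode large chunks of samples concurrently.
from concurrent.futures import ThreadPoolExecutor

def encode_chunk(samples):
    # Stand-in "encoding"; a real implementation would call the codec here.
    return [s // 2 for s in samples]

def encode_parallel(samples, n_workers=4):
    size = max(1, len(samples) // n_workers)
    chunks = [samples[i:i + size] for i in range(0, len(samples), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        encoded = pool.map(encode_chunk, chunks)  # results keep chunk order
    return [s for chunk in encoded for s in chunk]
```

For CPU-bound encoding in CPython one would use processes rather than threads, and real audio chunking must respect frame boundaries; the structural point, few large independent work units rather than many tiny ones, is what the lesson is about.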
Dissecting sequential programs for parallelization-An approach based on computational units
When trying to parallelize a sequential program, programmers routinely struggle with the first step: finding out which code sections can be made to run in parallel. While identifying such code sections, most current parallelism discovery techniques focus on specific language constructs. In contrast, we propose to concentrate on the computations performed by a program. In our approach, a program is treated as a collection of computations communicating with one another through a number of variables. Each computation is represented as a computational unit (CU). A CU contains the inputs and outputs of a computation, which proceeds in three phases: read, compute, and write. Based on the notion of a CU, which ensures that the read phase executes before the write phase, we present a unified framework to identify both loop parallelism and task parallelism in sequential programs. We conducted a range of experiments on 23 applications from four different benchmark suites. Our approach accurately identified the parallelization opportunities in the benchmark applications, as confirmed by comparison with their parallel versions. We also parallelized the opportunities identified by our approach that were not implemented in the parallel versions of the benchmarks and report the resulting speedups.
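The CU idea can be sketched as follows: each unit records which variables its read phase consumes and its write phase produces, and two units may run in parallel when neither reads what the other writes and they do not write the same variable (Bernstein's conditions). The `CU` class and `can_parallelize` helper are illustrative names, not the paper's implementation:

```python
# Sketch of computational units and a pairwise parallelizability check.
class CU:
    """A computational unit: its read phase consumes `reads`, its
    write phase produces `writes`."""
    def __init__(self, name, reads, writes):
        self.name, self.reads, self.writes = name, set(reads), set(writes)

def can_parallelize(a, b):
    """Bernstein's conditions: no RAW, WAR, or WAW dependence."""
    return not (a.writes & b.reads or b.writes & a.reads
                or a.writes & b.writes)

u1 = CU("u1", reads={"x"}, writes={"y"})
u2 = CU("u2", reads={"x"}, writes={"z"})
u3 = CU("u3", reads={"y"}, writes={"x"})
```

Here `u1` and `u2` only share a read of `x` and can run concurrently, while `u3` both consumes `u1`'s output and overwrites their shared input, so it must be ordered after them.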
