
    Research Article Software Component Selection Based on Quality Criteria Using the Analytic Network Process

    Component-based software development (CBSD) endeavors to deliver cost-effective, quality software systems through the selection and integration of commercially available software components. CBSD emphasizes the design and development of software systems using preexisting components. Software component reusability is an indispensable part of the component-based software development life cycle (CBSDLC), which consumes a significant amount of an organization’s resources, that is, time and effort. In a component-based software system (CBSS), it is essential to select the most suitable and appropriate software components, namely those that provide all the required functionalities. Selecting the most appropriate components is crucial for the success of the entire system. However, decisions regarding software component reusability are often made in an ad hoc manner, which ultimately results in schedule delays and lowers the quality of the entire system. In this paper, we discuss the analytic network process (ANP) method for software component selection. The methodology is explained and assessed using a real-life case study.
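The pairwise-comparison core of ANP (shared with AHP) can be sketched as follows. The judgment values and the three candidate components are hypothetical, and a full ANP model would additionally capture dependence among criteria via a supermatrix, which this minimal sketch omits.

```python
# Minimal sketch of deriving priority weights from a pairwise comparison
# matrix, the core computation behind AHP/ANP component scoring.
# The judgment matrix below is a hypothetical example.

def priority_vector(matrix):
    """Approximate the principal eigenvector by normalizing each column
    and averaging across rows (the standard AHP approximation)."""
    n = len(matrix)
    col_sums = [sum(matrix[i][j] for i in range(n)) for j in range(n)]
    normalized = [[matrix[i][j] / col_sums[j] for j in range(n)]
                  for i in range(n)]
    return [sum(row) / n for row in normalized]

# Hypothetical judgments comparing three candidate components on one
# quality criterion: A is 3x preferred over B and 5x over C; B is 2x
# preferred over C. Reciprocals fill the lower triangle.
judgments = [
    [1,     3,   5],
    [1 / 3, 1,   2],
    [1 / 5, 1 / 2, 1],
]
weights = priority_vector(judgments)  # weights sum to 1
```

In practice one such vector is computed per quality criterion and the results are combined; a consistency check on the judgments would also be applied before trusting the weights.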

    The Encyclopedia of Neutrosophic Researchers - vol. 1

    This is the first volume of the Encyclopedia of Neutrosophic Researchers, edited from materials offered by the authors who responded to the editor’s invitation. The authors are listed alphabetically. The introduction contains a short history of neutrosophics, together with links to the main papers and books. Neutrosophic set, neutrosophic logic, neutrosophic probability, neutrosophic statistics, neutrosophic measure, neutrosophic precalculus, neutrosophic calculus, and so on are gaining significant attention in solving many real-life problems that involve uncertainty, impreciseness, vagueness, incompleteness, inconsistency, and indeterminacy. In recent years, neutrosophics has been extended and applied in various fields, such as artificial intelligence, data mining, soft computing, decision making in incomplete/indeterminate/inconsistent information systems, image processing, computational modelling, robotics, medical diagnosis, biomedical engineering, investment problems, economic forecasting, social science, and humanistic and practical achievements.

    Anomaly Detection in RFID Networks

    Available security standards for RFID networks (e.g. ISO/IEC 29167) are designed to secure individual tag-reader sessions and do not protect against active attacks that could compromise the system as a whole (e.g. tag cloning or replay attacks). Proper traffic characterization models of the communication within an RFID network can lead to a better understanding of operation under “normal” system-state conditions and can consequently help identify security breaches not addressed by current standards. This study of RFID traffic characterization considers two piecewise-constant data-smoothing techniques over time-tagged events, namely the Bayesian blocks and Knuth’s algorithms, and compares them in the context of rate-based anomaly detection. This was accomplished using data from experimental RFID readings and comparing (1) the event counts over time obtained from the smoothed curves versus empirical histograms of the raw data and (2) the threshold-dependent alert rates based on inter-arrival times obtained from the smoothed curves versus those of the raw data itself. Results indicate that both algorithms adequately model RFID traffic in which inter-event time statistics are stationary, but that Bayesian blocks becomes superior for traffic in which such statistics experience abrupt changes.
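The threshold step of the rate-based detection described above can be illustrated in a few lines. This is a sketch under stated assumptions: the timestamps are hypothetical reader events, and a real pipeline would first smooth the counts with Bayesian blocks or Knuth's rule before thresholding, which this sketch skips.

```python
# Minimal sketch of threshold-dependent alerting over time-tagged RFID
# reads: gaps shorter than a threshold count toward the alert rate.
# The timestamps below are hypothetical.

def inter_arrival_times(timestamps):
    """Gaps between consecutive time-tagged events."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def alert_rate(timestamps, threshold):
    """Fraction of inter-arrival gaps shorter than `threshold`;
    a burst of rapid reads may indicate replay or cloning activity."""
    gaps = inter_arrival_times(timestamps)
    if not gaps:
        return 0.0
    return sum(g < threshold for g in gaps) / len(gaps)

# Hypothetical reader log (seconds): steady ~1 s traffic, then a burst.
reads = [0.0, 1.0, 2.0, 3.0, 3.05, 3.1, 3.15, 4.2]
rate = alert_rate(reads, threshold=0.5)
```

Sweeping the threshold yields the alert-rate curves that the study compares between smoothed and raw data.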

    A game player expertise level classification system using electroencephalography (EEG)

    The success and wide adoption of smartphones has given a new dimension to the gaming industry. Given the wide spectrum of video games, the success of a particular game depends on how efficiently it captures the end user's attention. This leads to the need to analyse the cognitive state of the end user, that is, the game player, during game play. A direct window into how an end user responds to a stimulus is their brain activity. In this study, electroencephalography (EEG) is used to record human brain activity during game play. A commercially available EEG headset is used for this purpose, giving fourteen channels of recorded EEG brain activity. The aim is to classify a player as expert or novice using the brain activity recorded as the player engages in game play. Three different machine learning classifiers have been used to train and test the system. Among the classifiers, naive Bayes outperformed the others with an accuracy of 88% when data from all fourteen EEG channels are used. Furthermore, the activity observed on the electrodes is statistically analysed and mapped for brain visualization. The analysis shows that, out of the available fourteen channels, only four channels in the frontal and occipital brain regions show significant activity. Features of these four channels are then used, and the performance of the four-channel classification is compared to the results of the fourteen-channel classification. Both support vector machine and naive Bayes give good classification accuracy and processing time, well suited for real-time applications.
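The expert/novice decision step can be sketched with a compact Gaussian naive Bayes, the family of classifier the study reports as best-performing. Everything concrete here is an assumption for illustration: the feature values stand in for per-channel EEG features, and the real system would extract features from the fourteen-channel (or reduced four-channel) headset recording.

```python
# Minimal sketch of Gaussian naive Bayes for expert/novice labeling.
# Feature values below are hypothetical stand-ins for EEG channel features.
import math

def fit(X, y):
    """Estimate per-class mean, variance, and prior for each feature."""
    model = {}
    for label in set(y):
        rows = [x for x, yy in zip(X, y) if yy == label]
        means = [sum(col) / len(rows) for col in zip(*rows)]
        varis = [sum((v - m) ** 2 for v in col) / len(rows) + 1e-9
                 for col, m in zip(zip(*rows), means)]
        model[label] = (means, varis, len(rows) / len(y))
    return model

def predict(model, x):
    """Pick the class maximizing the Gaussian log-likelihood + log prior."""
    def log_post(means, varis, prior):
        s = math.log(prior)
        for v, m, var in zip(x, means, varis):
            s += -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
        return s
    return max(model, key=lambda lbl: log_post(*model[lbl]))

# Hypothetical training features (e.g. band power on two channels).
X = [[0.9, 0.8], [1.0, 0.7], [0.2, 0.1], [0.3, 0.2]]
y = ["expert", "expert", "novice", "novice"]
nb = fit(X, y)
label = predict(nb, [0.95, 0.75])
```

The independence assumption across channels is what keeps this classifier fast enough for the real-time use the abstract highlights.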

    Role of images on World Wide Web readability

    As the Internet and World Wide Web have grown, they have brought many benefits. Anyone with access to a computer can find a great deal of information quickly and easily. Electronic devices can store and retrieve vast amounts of data in seconds. Products and services that were once available only in person can now be obtained without leaving the house. Documents can be translated from English to Urdu, or converted from text to speech, almost instantly, making it easier for people of different cultures and abilities to communicate. As technology improves, web developers and website visitors expect more animation, colour, and interactivity, and as computers get faster at processing images and other graphics, web developers use them more and more. For users who can see them, colour, pictures, animation, and images can aid comprehension and readability and improve the Web experience. Images can also benefit people who have difficulty reading or whose first language is not used on the website. However, not all images help people understand and read the text they accompany; purely decorative images, or images chosen arbitrarily by the website's creators, should not be used. Moreover, several factors can affect how easy graphical content is to read, such as low image resolution, a poor aspect ratio, a poor colour combination within the image itself, or a small font size, and the WCAG provides guidelines for each of these problems. The guidelines recommend alternative text, appropriate colour combinations, sufficient contrast, and higher resolution. One of the biggest remaining problems is that images unrelated to the text of a web page can make that text harder to read, whereas relevant images can make the page easier to read. This thesis proposes a method to determine how relevant the images on a website are from the point of view of web readability.
    The method combines image-information extraction, using the Cloud Vision API and Optical Character Recognition (OCR), with text read from the website, to find the relevancy between them. Data preprocessing techniques are applied to the extracted information, and a Natural Language Processing (NLP) technique is used to determine how the images and text on a web page relate to each other. The tool analyses the images of fifty educational websites and assesses their relevance. Results show that images unrelated to the page's content, and images of low quality, produce lower relevancy scores. A user study was conducted to evaluate the hypothesis that relevant images enhance web readability, based on two evaluations: an end-user evaluation with 1024 participants and a heuristic evaluation by 32 accessibility experts. The user study asked questions about what users know, how they feel, and what they can do. The results support the idea that images relevant to the page make it easier to read. This method will help web designers make pages easier to read by examining only the essential parts of a page rather than relying on their own judgment. PhD Program in Computer Science and Technology, Universidad Carlos III de Madrid. Chair: José Luis López Cuadrado.- Secretary: Divakar Yadav.- Panel member: Arti Jai
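One common NLP technique for the relevancy step described above is cosine similarity over bag-of-words vectors; the sketch below uses that as a stand-in for the thesis pipeline, with hypothetical strings in place of real Cloud Vision labels, OCR output, and page text.

```python
# Minimal sketch: score image-page relevancy as cosine similarity between
# bag-of-words vectors of the image's extracted text/labels and the page
# text. All example strings are hypothetical.
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine of the angle between term-frequency vectors of two texts."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

page_text = "introduction to binary search trees and tree traversal"
relevant_labels = "diagram binary search tree nodes"      # on-topic image
decorative_labels = "sunset beach ocean"                  # decorative image

rel = cosine_similarity(page_text, relevant_labels)
dec = cosine_similarity(page_text, decorative_labels)
```

A decorative image shares no vocabulary with the page and scores zero, matching the thesis finding that irrelevant images produce lower relevancy scores; a production system would add stemming, stop-word removal, and semantic (embedding-based) matching on top of this.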

    Forensic investigation of small-scale digital devices: a futuristic view

    Small-scale digital devices (SSDDs) such as smartphones, smart toys, drones, gaming consoles, tablets, and other personal data assistants have become ingrained constituents of our daily lives. These devices store massive amounts of data related to individual traits of users, their routine operations, medical histories, and financial information. At the same time, with continuously evolving technology, the diversity in operating systems, client storage localities, remote/cloud storage and backups, and encryption practices renders the forensic analysis task multi-faceted, leaving forensic investigators to deal with an array of novel challenges. This study reviews the forensic frameworks and procedures used in investigating small-scale digital devices. While highlighting the challenges faced by digital forensics, we explore how cutting-edge technologies such as Blockchain, Artificial Intelligence, Machine Learning, and Data Science may play a role in remedying these concerns. The review aims to accumulate the state of the art and identify a futuristic approach for investigating SSDDs.

    SCNN-Attack: A Side-Channel Attack to Identify YouTube Videos in a VPN and Non-VPN Network Traffic

    Encryption protocols, e.g., HTTPS, are utilized to secure the traffic between servers and clients for YouTube and other video streaming services, and VPNs are used to secure the communication further. However, these protocols are not sufficient to hide the identity of the videos from someone who can sniff the network traffic. The present work explores methodologies and features to identify videos in VPN and non-VPN network traffic. To identify such videos, a side-channel attack using a Sequential Convolutional Neural Network is proposed. The results demonstrate that a sequence of bytes per second obtained from even one minute of sniffing network traffic is sufficient to predict the video with high accuracy. Accuracy increases to 90% for non-VPN traffic, 66% for VPN traffic, and 77% for mixed VPN and non-VPN traffic, for models with two-minute sniffing.
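The bytes-per-second sequence that the proposed model consumes can be derived from sniffed packet records in a few lines. This is a sketch, not the paper's implementation: the packet (timestamp, size) records are hypothetical, and the real attack feeds the resulting 60- or 120-element sequence into the convolutional network.

```python
# Minimal sketch: bin sniffed packet sizes into 1-second buckets to form
# the bytes-per-second feature sequence. Packet records are hypothetical.

def bytes_per_second(packets, window_seconds):
    """Sum packet sizes into 1-second bins over the sniffing window.
    `packets` is a list of (timestamp_seconds, size_bytes) tuples."""
    bins = [0] * window_seconds
    for ts, size in packets:
        if 0 <= ts < window_seconds:
            bins[int(ts)] += size
    return bins

# Hypothetical capture, truncated to a 5-second window for illustration.
packets = [(0.2, 1500), (0.9, 600), (2.4, 1500), (2.5, 1500), (4.1, 400)]
seq = bytes_per_second(packets, window_seconds=5)
```

Even under encryption the packet sizes and timing remain observable, which is why this coarse per-second byte count leaks enough structure to fingerprint a video.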