
    AUTOMATED HIGH-SPEED MONITORING OF METAL TRANSFER FOR REAL-TIME CONTROL

    In the novel Double Electrode Gas Metal Arc Welding (DE-GMAW) process, the transfer of liquid metal from the wire to the work-piece determines the weld quality, and for precision-critical applications the metal transfer process must be monitored and controlled to regulate the diameter, trajectory, and transfer rate of the liquid-metal droplet. In this doctoral research work, the traditional tracking methods of Correlation, Least Square Matching (LSM) and Kalman Filtering (KF) are tried first. All of them fail because of the poor quality of the metal transfer images and the variability of the droplets. Several novel image processing algorithms, the Brightness Based Separation Algorithm (BBSA), the Brightness and Subtraction Based Separation Algorithm (BSBSA), and the Brightness Based Selection and Edge Detection Based Enhancement Separation Algorithm (BBSEDBESA), are therefore proposed to compute the size and locate the position of the droplet. Experimental results verified that the proposed algorithms can automatically locate the droplets and compute the droplet size with adequate accuracy. Since the final objective is to process the metal transfer automatically in real time, a real-time processing system is implemented and its details are described. In traditional Gas Metal Arc Welding (GMAW), the well-known laser back-lighting technique has been widely used to image the metal transfer process, but the complexity of laser imaging systems makes it inconvenient for practical applications. This work therefore proposes a simplified laser imaging system together with two effective image processing algorithms, the Probability Based Double Thresholds Separation Algorithm and the Edge Based Separation Algorithm, to process the captured metal transfer images. Experimental results verified that the proposed simplified laser back-lighting imaging system and image processing algorithms can be used for real-time processing of metal transfer images.
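    The brightness-based separation idea lends itself to a compact illustration. The following Python sketch (the function name, the fixed threshold, and the use of SciPy are assumptions made for illustration, not the dissertation's actual BBSA implementation) thresholds a frame by brightness, keeps the largest bright connected component as the droplet, and reports its position and equivalent diameter:

    ```python
    import numpy as np
    from scipy import ndimage

    def locate_droplet(frame, threshold=200):
        """Brightness-based droplet separation (illustrative sketch).

        frame: 2-D uint8 grayscale image of the metal transfer region.
        Returns (row, col, equivalent_diameter) of the largest bright
        component, or None if no pixel exceeds the threshold.
        """
        mask = frame >= threshold               # separate bright pixels
        labels, n = ndimage.label(mask)         # connected components
        if n == 0:
            return None
        # keep the largest bright component, assumed to be the droplet
        sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
        droplet = labels == (np.argmax(sizes) + 1)
        area = droplet.sum()
        row, col = ndimage.center_of_mass(droplet)
        diameter = 2.0 * np.sqrt(area / np.pi)  # equivalent circle
        return row, col, diameter
    ```

    In practice the threshold would be tuned per welding setup; running a routine like this once per captured frame is what the real-time processing system must sustain.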

    Hercules Against Data Series Similarity Search

    We propose Hercules, a parallel tree-based technique for exact similarity search on massive disk-based data series collections. We present novel index construction and query answering algorithms that leverage different summarization techniques, carefully schedule costly operations, optimize memory and disk accesses, and exploit the multi-threading and SIMD capabilities of modern hardware to perform CPU-intensive calculations. We demonstrate the superiority and robustness of Hercules with an extensive experimental evaluation against state-of-the-art techniques, using many synthetic and real datasets, and query workloads of varying difficulty. The results show that Hercules performs up to one order of magnitude faster than the best competitor (which is not always the same technique). Moreover, Hercules is the only index that outperforms the optimized scan in all scenarios, including the hard query workloads on disk-based datasets. This paper was published in the Proceedings of the VLDB Endowment, Volume 15, Number 10, June 2022.
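    As context for what an index like Hercules must beat, here is a minimal sequential-scan baseline in Python with early abandoning of the Euclidean distance. This is in the spirit of the "optimized scan" mentioned above, not Hercules's own algorithm, whose summarizations, operation scheduling and SIMD use are beyond a sketch:

    ```python
    import numpy as np

    def exact_1nn_scan(query, collection):
        """Exact 1-NN by sequential scan with early abandoning.

        query: 1-D array; collection: iterable of 1-D arrays of the
        same length. Returns (best_index, best_distance).
        """
        best_d2, best_i = np.inf, -1
        for i, series in enumerate(collection):
            d2 = 0.0
            for a, b in zip(query, series):
                d2 += (a - b) ** 2
                if d2 >= best_d2:   # early abandon: cannot improve
                    break
            else:                   # loop completed: new best match
                best_d2, best_i = d2, i
        return best_i, float(np.sqrt(best_d2))
    ```

    A tree-based index improves on such a scan by pruning whole subtrees whose summarized lower bound already exceeds the best distance found so far.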

    ATOM: a distributed system for video retrieval via ATM networks

    The convergence of high-speed networks, powerful personal computer processors and improved storage technology has led to the development of video-on-demand services to the desktop that provide interactive controls and deliver Client-selected video information on a Client-specified schedule. This dissertation presents the design of a video-on-demand system for Asynchronous Transfer Mode (ATM) networks, incorporating an optimised topology for the nodes in the system and an architecture for Quality of Service (QoS). The system is called ATOM, which stands for Asynchronous Transfer Mode Objects. Real-time video playback over a network consumes large bandwidth and requires strict bounds on delay and error in order to satisfy the visual and auditory needs of the user. Streamed video is a fundamentally different type of traffic from conventional IP (Internet Protocol) data, since files are viewed in real time rather than downloaded and then viewed. The streamed data must arrive at the Client decoder when needed or it loses its interactive value. Characteristics of multimedia data are investigated, including the use of compression to reduce the excessive bit rates and storage requirements of digital video, and the suitability of MPEG-1 for video-on-demand is presented. Having considered the bandwidth, delay and error requirements of real-time video, the dissertation evaluates current models of video-on-demand. The distributed nature of four such models is considered, focusing on how Clients discover Servers and locate videos. This evaluation eliminates a centralized approach in which Servers have no logical or physical connection to any other Servers in the network, and introduces the concept of a selection strategy to find alternative Servers when a Server is fully loaded. During this investigation, it becomes clear that another entity (called a Broker) could provide a central repository for Server information: Clients have logical access to all videos on every Server simply by connecting to a Broker. The ATOM Model for distributed video-on-demand is then presented by way of a diagram of the topology showing the interconnection of Servers, Brokers and Clients; a description of each node in the system; a list of the connectivity rules; a description of the protocol; a description of the Server selection strategy; and the protocol followed if a Broker fails. A sample network is provided with an example of video selection, and design issues are raised and solved, including how nodes discover each other, a justification for using a mesh topology for the Broker connections, how Connection Admission Control (CAC) is achieved, how customer billing is achieved, and how information security is maintained. A calculation of the number of Servers and Brokers required to service a particular number of Clients is presented. The advantages of ATOM are described. The underlying distributed connectivity is abstracted away from the Client. Redundant Server/Broker connections are eliminated, and the total number of connections in the system is minimized by the rule stating that Clients and Servers may only connect to one Broker at a time. This reduces the total number of Switched Virtual Circuits (SVCs), which are a performance hindrance in ATM. ATOM can be easily scaled by adding more Servers, which increases the total system capacity in terms of storage and bandwidth. In order to transport video satisfactorily, a guaranteed end-to-end Quality of Service architecture must be in place.
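    A minimal sketch of the Broker idea, assuming a least-loaded selection strategy (the class layout and the strategy are illustrative assumptions; ATOM's actual protocol and node behaviour are specified in the dissertation itself):

    ```python
    class Server:
        def __init__(self, name, capacity, videos):
            self.name = name
            self.capacity = capacity   # maximum simultaneous streams
            self.active = 0            # streams currently being served
            self.videos = set(videos)

    class Broker:
        """Central repository of Server information (sketch)."""

        def __init__(self, servers):
            self.servers = servers

        def select_server(self, video):
            # candidates hold the video and are not fully loaded
            candidates = [s for s in self.servers
                          if video in s.videos and s.active < s.capacity]
            if not candidates:
                return None            # every holder is fully loaded
            # selection strategy: least-loaded server first
            best = min(candidates, key=lambda s: s.active / s.capacity)
            best.active += 1
            return best
    ```

    Because the Client talks only to its Broker, adding a Server changes nothing on the Client side, which is the scalability property claimed above.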
The design methodology for such a QoS architecture is investigated, starting with a review of current QoS architectures in the literature, which highlights important definitions including a flow, a service contract and flow management. A flow is a single media source which traverses resource modules between Server and Client. The concept of a flow is important because it enables the identification of the areas requiring consideration when designing a QoS architecture. It is shown that ATOM adheres to the principles motivating the design of a QoS architecture, namely the Integration, Separation and Transparency principles. The issue of mapping human requirements to network QoS parameters is investigated, and the action of a QoS framework is introduced, including several possible causes of QoS degradation. The design of the ATOM Quality of Service Architecture (AQOSA) is then presented. AQOSA consists of 11 modules which interact to provide end-to-end QoS guarantees for each stream. Several important results arise from the design. It is shown that an intelligent choice of stored videos in respect of peak bandwidth can improve overall system capacity. The concept of disk striping over a disk array is introduced, and a Data Placement Strategy is designed which eliminates disk hot spots (i.e. the overuse of some disks whilst others lie idle). A novel parameter (the B-P Ratio) is presented which can be used by the Server to predict future bursts from each video stream. The use of Traffic Shaping to decrease the load each stream places on the network is presented. Having investigated four algorithms for rewind and fast-forward in the literature, a rewind and fast-forward algorithm is presented. The method produces a significant decrease in bandwidth, and the resultant stream has a nearly constant rate, reducing the chance that the stream will add to network congestion. The C++ classes of the Server, Broker and Client are described, emphasizing the interaction between classes. The use of ATOM in a Virtual Private Network and in a multimedia teaching laboratory is considered. Conclusions and recommendations for future work are presented. It is concluded that digital video applications require high-bandwidth, low-error, low-delay networks; that a video-on-demand system supporting large Client volumes must be distributed, not centralized; that control and operation (transport) must be separated; that the number of ATM Switched Virtual Circuits (SVCs) must be minimized; that the increased number of connections caused by the Broker mesh is justified by the distributed information gain; and that a Quality of Service solution must address end-to-end issues. It is recommended that a web front-end for Brokers be developed; that the system be tested in a wide-area ATM network; that the Broker protocol be tested by forcing failure of a Broker; and that a proprietary file format for disk striping be implemented.
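    The hot-spot-avoiding placement can be illustrated with plain round-robin striping; the dissertation's actual Data Placement Strategy may differ in detail:

    ```python
    def stripe_blocks(num_blocks, num_disks, start=0):
        """Round-robin placement of one video's blocks over a disk array.

        Consecutive blocks land on consecutive disks, so a playing
        stream touches every disk in turn instead of overusing one.
        Returns a dict mapping block number to disk number.
        """
        return {b: (start + b) % num_disks for b in range(num_blocks)}

    # Example: a 10-block video striped over 4 disks, starting at disk 2
    placement = stripe_blocks(10, 4, start=2)
    assert placement[0] == 2 and placement[1] == 3 and placement[2] == 0
    ```

    Staggering the start disk per video spreads simultaneous streams of different videos across the array as well.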

    Quality of Service Controlled Multimedia Transport Protocol

    This research looks at the design of an open transport protocol that supports a range of services, including multimedia, over low data-rate networks. Low data-rate multimedia applications require a system that provides quality of service (QoS) assurance and flexibility. One promising field is the area of content-based coding. Content-based systems use an array of protocols to select the optimum set of coding algorithms. A content-based transport protocol integrates a content-based application with a transmission network. General transport protocols form a bottleneck in low data-rate multimedia communications by limiting throughput or by not maintaining timing requirements. This work presents an original model of a transport protocol that eliminates the bottleneck by introducing a flexible yet efficient algorithm that uses an open approach to flexibility and a holistic architecture to promote QoS. The flexibility and transparency come in the form of a fixed syntax that provides a set of transport protocol semantics. The media QoS is maintained by defining a generic descriptor. Overall, the structure of the protocol is based on a single adaptable algorithm that supports application independence, network independence and quality of service. The transport protocol was evaluated through a set of assessments: off-line; off-line for a specific application; and on-line for a specific application. Application contexts used MPEG-4 test material, where the on-line assessment used a modified MPEG-4 player. The performance of the QoS controlled transport protocol is often better than other schemes when appropriate QoS controlled management algorithms are selected. This is shown first for an off-line assessment where the performance is compared between the QoS controlled multiplexer, an emulated MPEG-4 FlexMux multiplexer scheme, and the target requirements. The performance is also shown to be better in a real environment when the QoS controlled multiplexer is compared with the real MPEG-4 FlexMux scheme.
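    The abstract does not spell out the management algorithms, but the flavour of QoS-controlled multiplexing can be sketched with an earliest-deadline-first scheduler over several media streams (an assumed policy, for illustration only):

    ```python
    import heapq

    def multiplex(streams):
        """Earliest-deadline-first multiplexing of media units.

        streams: list of iterables yielding (deadline, payload) pairs
        in nondecreasing deadline order. Yields the most urgent unit
        across all streams at each step.
        """
        heap, iters = [], [iter(s) for s in streams]
        for i, it in enumerate(iters):
            first = next(it, None)
            if first is not None:
                heapq.heappush(heap, (first[0], i, first[1]))
        while heap:
            deadline, i, payload = heapq.heappop(heap)
            yield deadline, payload        # transmit most urgent unit
            nxt = next(iters[i], None)
            if nxt is not None:
                heapq.heappush(heap, (nxt[0], i, nxt[1]))
    ```

    Swapping the ordering key is how a different management algorithm would change the multiplexer's behaviour, which matches the observation above that performance depends on selecting appropriate algorithms.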

    A framework for interactive end-user web automation

    This research investigates the feasibility and usefulness of a Web-based model for end-user Web automation. The aim is to empower end users to automate their Web interactions. Web automation is defined here as the study of theoretical and practical techniques for applying an end-user programming model to enable the automation of Web tasks, activities, or interactions. To date, few tools address the issue of Web automation; moreover, their functionality and usage are limited. A novel model is presented, which combines end-user programming techniques and the software tools philosophy with the vision of the “Web as a platform.” The model provides a Web-based environment that enables the rapid creation of efficient and useful Web-oriented automation tools. It consists of a command line for the Web, a shell scripting language, and a repository of Web commands. A framework called Web2Sh (Web 2.0 Shell) has been implemented, which includes the design and implementation of a scripting language (WSh) enabling end users to create and customise Web commands. A number of core Web2Sh Web commands were implemented. There are two techniques for extending the system: developers can implement new core Web commands, and end users can use WSh to connect, customise, and parameterise Web commands to create new ones. The feasibility and usefulness of the proposed model have been demonstrated by implementing several automation scripts using Web2Sh, and by a case-study-based experiment carried out by volunteer participants. The implemented Web2Sh framework provides a novel and realistic environment for creating, customising, and running Web-oriented automation tools.
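    WSh's actual syntax is not given in the abstract, but the pipeline idea behind "a command line for the Web" can be sketched in Python with two composable command analogues (fetch and grep are hypothetical stand-ins, not real Web2Sh commands):

    ```python
    import urllib.request

    def fetch(url):
        """Web command analogue: fetch a page as text."""
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode("utf-8", errors="replace")

    def grep(lines, needle):
        """Web command analogue: keep lines containing the needle."""
        return [ln for ln in lines if needle in ln]

    def pipeline(url, needle):
        """Compose commands the way a WSh script would pipe them."""
        return grep(fetch(url).splitlines(), needle)

    # e.g. pipeline("https://example.com", "href") -> link-bearing lines
    ```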

    Design of a Health Monitoring Device

    Home medical monitoring systems allow care providers to reduce their patient load, but no available systems offer portable operation. This effectively tethers patients to a specific location. In conjunction with the University of Limerick, our team designed and implemented a proof-of-concept portable medical monitor able to transfer medical data wirelessly. Our completed project supports USB and 802.11b, includes a display and basic user interface, and runs Linux, making it a highly flexible platform for future progression toward marketability.

    Video Content Understanding Using Text

    The rise of social media and the video streaming industry has provided a plethora of videos and corresponding descriptive information in the form of concepts (words) and textual video captions. Given the vast amount of available videos and textual data, there has never been a better time to study Computer Vision and Machine Learning problems involving videos and text. In this dissertation, we tackle multiple problems associated with the joint understanding of videos and text. We first address the task of multi-concept video retrieval, where the input is a set of words as concepts, and the output is a ranked list of full-length videos. This approach deals with multi-concept input and the prolonged length of videos by incorporating multiple latent variables to tie the information within each shot (a short clip of a full video) and across shots. Secondly, we address the problem of video question answering, in which the task is to answer a question, in the form of Fill-In-the-Blank (FIB), given a video. Answering a question is a task of retrieving a word from a dictionary (all possible words suitable for an answer) based on the input question and video. Following the FIB problem, we introduce a new problem, called Visual Text Correction (VTC), i.e., detecting and replacing an inaccurate word in the textual description of a video. We propose a deep network that can simultaneously detect an inaccuracy in a sentence, benefiting from 1D-CNNs/LSTMs to encode short/long-term dependencies, and fix it by replacing the inaccurate word(s). Finally, in the last part of the dissertation, we tackle the problem of video generation from user-input natural language sentences. Our proposed video generation method constructs two distributions from the input text, corresponding to the latent representations of the first and last frames. We generate high-fidelity videos by interpolating latent representations and applying a sequence of CNN-based up-pooling blocks.
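    The first/last-frame interpolation step can be sketched in isolation. In this Python fragment the text-conditioned Gaussian parameters are assumed to be given (in the dissertation they come from an encoder network), and the CNN up-pooling decoder that turns each latent into a frame is omitted:

    ```python
    import numpy as np

    def frame_latents(mu_first, sigma_first, mu_last, sigma_last, num_frames):
        """Sample first/last-frame latents and interpolate between them.

        All four parameters are 1-D arrays of the latent dimension.
        Returns an array of shape (num_frames, latent_dim); each row
        would be decoded to one frame by the up-pooling blocks.
        """
        rng = np.random.default_rng(0)
        z_first = mu_first + sigma_first * rng.standard_normal(mu_first.shape)
        z_last = mu_last + sigma_last * rng.standard_normal(mu_last.shape)
        alphas = np.linspace(0.0, 1.0, num_frames)[:, None]
        return (1.0 - alphas) * z_first + alphas * z_last
    ```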

    Using embedded hardware monitor cores in critical computer systems

    The integration of FPGA devices in many different architectures and services makes monitoring and real-time detection of errors an important concern in FPGA system design. A monitor is a tool, or a set of tools, that facilitates analytic measurements in observing a given system. The goal of these observations is usually performance analysis and optimisation, or the surveillance of the system. However, System-on-Chip (SoC) based designs leave few points at which to attach external tools such as logic analysers. Thus, an embedded error detection core that allows observation of critical system nodes (such as processor cores and buses) should safeguard the operation of the FPGA-based system in order to prevent system failures. The core should not interfere with system performance and must ensure timely detection of errors. This thesis is an investigation into how a robust hardware-monitoring module can be efficiently integrated in a target PCI board (with FPGA-based application processing features) which is part of a critical computing system. [Continues.]
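    The monitoring idea itself, a passive observer that flags errors in time rather than driving the system, can be modelled in software. The sketch below is a behavioural Python model of the concept (the event names and timeout policy are assumptions); the thesis realises it as an embedded hardware core:

    ```python
    class BusMonitor:
        """Passive bus observer that flags overdue transactions."""

        def __init__(self, timeout_cycles):
            self.timeout = timeout_cycles
            self.pending = {}          # transaction tag -> issue cycle

        def observe(self, cycle, event, tag):
            """Record one observed bus event; never drives the bus."""
            errors = []
            if event == "request":
                self.pending[tag] = cycle
            elif event == "response":
                self.pending.pop(tag, None)
            # timely detection: flag requests left unanswered too long
            for t, issued in list(self.pending.items()):
                if cycle - issued > self.timeout:
                    errors.append(f"timeout on transaction {t}")
                    del self.pending[t]
            return errors
    ```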

    Colonial film: moving images of the British Empire

    Between 2009 and 2010 I was employed as a postdoctoral researcher on the AHRC-funded project Colonial Film: Moving Images of the British Empire. The primary outcome of this project was a database detailing the colonial films held by the BFI, the Imperial War Museum and the British Empire and Commonwealth Museum. Many of these films are not widely known, and the project provided the first thorough documentation of these materials. I was employed to write 95 essays of 1,000 words each about selected films from this database. These essays were broken down into context and analysis of the films and were reviewed by archivists at the relevant institutions, as well as by the project’s co-directors, Colin MacCabe (Pittsburgh) and Lee Grieveson (UCL).