
    Design and development of a smart panel with five decentralised control units for the reduction of vibration and sound radiation

    This Technical Report discusses the design and construction of a smart panel with five decentralised direct velocity feedback control units, intended to reduce the vibration of a panel dominated by well-separated low-frequency resonances. Each control unit consists of an accelerometer sensor and a piezoelectric patch strain actuator. The integrated accelerometer signal is fed back to the actuator via a fixed negative control gain, so that the actuator generates a control excitation proportional and opposite to the measured transverse velocity of the panel, producing active damping. First, the open-loop frequency response function between the sensor and the actuator of a single channel was studied, and an analogue controller was designed and tested to improve the stability of this control system. Next, the stability of all five control units was assessed using the generalised Nyquist criterion. Finally, the performance of the smart panel was tested with reference to the reduction of vibration at the error positions and to the reduction of the radiated sound. In an appendix to this Report, a parametric study is presented on the properties of sensor-actuator FRFs measured with different types of piezoelectric patch actuators. The results of this parametric study were used to choose the actuators for the construction of the smart panel.
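    The feedback law described above can be illustrated with a minimal sketch. This is not the report's actual panel model: a single panel mode is approximated here as a hypothetical mass-spring-damper, with direct velocity feedback force f = -g * v standing in for the accelerometer/piezo-patch control unit, and all numeric values are invented for the example.

```python
# Minimal sketch (hypothetical modal parameters, not the report's model):
# one lightly damped panel mode with direct velocity feedback f = -g * v.

def simulate(gain, steps=20000, dt=1e-4):
    m, c, k = 0.1, 0.5, 4.0e4   # assumed modal mass, damping, stiffness
    x, v = 1e-3, 0.0            # initial displacement, zero velocity
    peak = 0.0
    for i in range(steps):
        f = -gain * v           # feedback force opposes measured velocity
        a = (f - c * v - k * x) / m
        v += a * dt             # semi-implicit Euler integration
        x += v * dt
        if i > steps // 2:      # track the late-time vibration amplitude
            peak = max(peak, abs(x))
    return peak
```

    Because the feedback force always opposes the velocity, it acts as added damping: `simulate(5.0)` leaves a much smaller late-time amplitude than `simulate(0.0)`.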

    Strain transducers for active control - lumped parameter model


    Multimodal segmentation of lifelog data

    A personal lifelog of visual and audio information can be very helpful as a human memory augmentation tool. The SenseCam, a passive wearable camera, used in conjunction with an iRiver MP3 audio recorder, will capture over 20,000 images and 100 hours of audio per week. Used constantly, this quickly builds up to a substantial collection of personal data. To gain real value from this collection it is important to automatically segment the data into meaningful units or activities. This paper investigates the optimal combination of data sources for segmenting personal data into such activities. Five data sources were logged and processed to segment a collection of personal data, namely: image processing on captured SenseCam images; audio processing on captured iRiver audio data; and processing of the temperature, white light level, and accelerometer sensors onboard the SenseCam device. The results indicate that a combination of the image, light, and accelerometer sensor data segments our collection of personal data better than a combination of all five data sources. The accelerometer sensor is good for detecting when the user moves to a new location, while the image and light sensors are good for detecting changes in wearer activity within the same location, as well as detecting when the wearer socially interacts with others.
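    The fusion of sensor streams for segmentation could be sketched roughly as follows. This is an illustration only, not the paper's method: each stream is scored with a simple before/after-mean change statistic, the scores are normalised per sensor, averaged, and thresholded; the window size and threshold fraction are arbitrary choices for the sketch.

```python
# Illustrative sketch of multi-stream change-point segmentation.
# Window size and threshold fraction are hypothetical, not the paper's.

def change_scores(stream, window=3):
    # Score each point by |mean(after window) - mean(before window)|.
    scores = []
    for i in range(window, len(stream) - window):
        before = sum(stream[i - window:i]) / window
        after = sum(stream[i:i + window]) / window
        scores.append(abs(after - before))
    return scores

def segment(streams, window=3, frac=0.9):
    # Normalise each sensor's scores, average across sensors, threshold.
    per_sensor = [change_scores(s, window) for s in streams]
    fused = []
    for column in zip(*per_sensor):
        norms = [c / (max(scores) or 1.0)
                 for c, scores in zip(column, per_sensor)]
        fused.append(sum(norms) / len(norms))
    top = max(fused)
    return [i + window for i, f in enumerate(fused)
            if top > 0 and f >= frac * top]
```

    When two synthetic streams both jump at sample 10, `segment` reports a single activity boundary at index 10, since the per-sensor evidence reinforces itself after fusion.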

    High Accuracy Human Activity Monitoring Using Neural Networks

    This paper presents the design of a neural network for the classification of human activity. A triaxial accelerometer sensor, housed in a chest-worn sensor unit, was used to capture the acceleration of the associated movements. Acceleration data from all three axes were collected at a base-station PC via a CC2420 2.4 GHz ISM-band radio (ZigBee wireless compliant), then processed and classified using MATLAB. A neural network approach to classification was adopted, informed by theoretical and empirical considerations. The work gives a detailed description of the design steps for the classification of human body acceleration data. A 4-layer back-propagation neural network, trained with the Levenberg-Marquardt algorithm, showed the best performance among the neural network training algorithms evaluated. Comment: 6 pages, 4 figures, 4 tables, International Conference on Convergence Information Technology, pp. 430-435, 2008 Third International Conference on Convergence and Hybrid Information Technology, 2008.
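    The pipeline of extracting a feature from triaxial accelerometer windows and training a classifier on it can be sketched in miniature. This toy is not the paper's 4-layer Levenberg-Marquardt network: it uses a single logistic neuron trained by plain gradient descent on one variance feature, with synthetic "still" versus "moving" windows invented for the example.

```python
import math
import random

# Toy stand-in for accelerometer-based activity classification: a single
# logistic neuron on a magnitude-variance feature. All data is synthetic.

def variance_feature(window):
    # Variance of the acceleration magnitude over a window of (x, y, z).
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in window]
    mean = sum(mags) / len(mags)
    return sum((m - mean) ** 2 for m in mags) / len(mags)

def train(samples, labels, epochs=500, lr=0.5):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for f, y in zip(samples, labels):
            p = 1.0 / (1.0 + math.exp(-(w * f + b)))  # sigmoid
            w += lr * (y - p) * f                     # gradient step
            b += lr * (y - p)
    return w, b

random.seed(0)
still = [[(0.0, 0.0, 1.0)] * 16 for _ in range(10)]       # gravity only
moving = [[(random.gauss(0, 1), random.gauss(0, 1), 1.0)  # shaken device
           for _ in range(16)] for _ in range(10)]
feats = [variance_feature(w) for w in still + moving]
labels = [0] * 10 + [1] * 10
w, b = train(feats, labels)
pred = [1 if w * f + b > 0 else 0 for f in feats]
accuracy = sum(p == y for p, y in zip(pred, labels)) / len(labels)
```

    The still windows have zero magnitude variance while the moving ones do not, so even this one-feature, one-neuron sketch separates the two classes; the paper's multi-class, multi-layer network follows the same feed-features-then-train pattern at larger scale.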

    Multisensor Data Fusion for Human Activities Classification and Fall Detection

    Significant research exists on the use of wearable sensors in the context of assisted living for activity recognition and fall detection, whereas radar sensors have been studied only recently in this domain. This paper addresses the performance limitations of individual sensors, especially for the classification of similar activities, by fusing features extracted from experimental data collected by different sensors, namely a tri-axial accelerometer, a micro-Doppler radar, and a depth camera. Preliminary results confirm that combining information from heterogeneous sensors improves the overall performance of the system. The classification accuracy attained by means of this fusion approach improves by 11.2% compared to radar-only use, and by 16.9% compared to the accelerometer. Furthermore, adding features extracted from an RGB-D Kinect sensor increases the overall classification accuracy to 91.3%.
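    Feature-level fusion of heterogeneous sensors often amounts to concatenating per-sensor feature vectors into one vector before classification. The sketch below illustrates only that concatenation step; the sensor names and feature values are invented for the example, not taken from the paper.

```python
# Hypothetical illustration of feature-level fusion: per-sensor feature
# vectors are concatenated into one vector prior to classification.

def fuse_features(per_sensor):
    # per_sensor maps sensor name -> feature vector (list of floats).
    fused = []
    for name in sorted(per_sensor):  # fixed order keeps dimensions stable
        fused.extend(per_sensor[name])
    return fused

sample = {
    "accelerometer": [0.12, 0.80],  # e.g. mean, variance of magnitude
    "radar": [3.4, 0.6, 1.1],       # e.g. micro-Doppler features
    "depth": [0.25],                # e.g. depth-camera centroid feature
}
vec = fuse_features(sample)         # one fused vector for the classifier
```

    Sorting the sensor names fixes the ordering, so every fused vector has the same layout; any classifier then sees one consistent feature space spanning all sensors.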

    Step Detection Algorithm For Accurate Distance Estimation Using Dynamic Step Length

    In this paper, a new smartphone-sensor-based algorithm is proposed for accurate distance estimation. The algorithm consists of two phases: the first detects peaks in the smartphone accelerometer signal; the second estimates the step length, which varies from step to step. The proposed algorithm was implemented and tested in a real environment and showed promising results. Unlike conventional approaches, the error of the proposed algorithm is bounded and is not affected by long distances. Keywords: distance estimation, peaks, step length, accelerometer. Comment: this paper contains 5 pages and 6 figures.
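    A two-phase pedometer of this kind can be sketched under stated assumptions: a simple local-maximum test on the acceleration magnitude marks step peaks, and a per-step dynamic length is estimated with the well-known Weinberg model, length = K * (a_max - a_min) ** 0.25. The constant K, the threshold, and the window are hypothetical tuning values, not the paper's.

```python
# Sketch of peak-based step detection plus a dynamic (Weinberg-style)
# step-length estimate. Constants are hypothetical tuning values.

def detect_steps(mag, threshold=1.5):
    # A sample is a step peak if it exceeds the threshold and both of
    # its neighbours (a simple local-maximum test).
    return [i for i in range(1, len(mag) - 1)
            if mag[i] > threshold and mag[i] > mag[i - 1] and mag[i] > mag[i + 1]]

def distance(mag, peaks, k=0.5, window=2):
    total = 0.0
    for p in peaks:
        seg = mag[max(0, p - window):p + window + 1]
        total += k * (max(seg) - min(seg)) ** 0.25  # dynamic step length
    return total

mag = [1.0, 1.0, 2.2, 1.0, 0.9, 2.6, 1.0, 1.0, 2.0, 1.0]
peaks = detect_steps(mag)       # three step peaks at indices 2, 5, 8
d = distance(mag, peaks)        # total distance in metres (toy data)
```

    Because each step's length is recomputed from the local acceleration swing rather than assumed constant, the per-step error does not accumulate with distance walked, which is the property the abstract emphasises.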

    Interaction With Tilting Gestures In Ubiquitous Environments

    In this paper, we introduce a tilting interface that controls direction-based applications in ubiquitous environments. A tilt interface is useful for situations that require remote and quick interactions or that take place in public spaces. We explored the proposed tilting interface with different application types and classified the tilting interaction techniques. Augmenting objects with sensors can potentially address the lack of intuitive and natural input devices in ubiquitous environments. We conducted an experiment to test the usability of the proposed tilting interface and to compare it with conventional input devices and hand gestures. The results showed that tilt gestures outperform hand gestures in terms of speed, accuracy, and user satisfaction. Comment: 13 pages, 10 figures
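    Tilt-gesture interfaces commonly recover pitch and roll from a static triaxial accelerometer reading and map the dominant tilt direction to a discrete command. The sketch below shows that standard computation; it is not the paper's implementation, and the 20-degree threshold and gesture names are hypothetical.

```python
import math

# Hedged sketch of tilt-gesture recognition from a static accelerometer
# reading. Threshold and gesture labels are invented for the example.

def tilt_angles(ax, ay, az):
    # Standard pitch/roll from gravity direction (units: g; output: deg).
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

def tilt_gesture(ax, ay, az, threshold=20.0):
    # Map the dominant tilt axis to a discrete direction command.
    pitch, roll = tilt_angles(ax, ay, az)
    if abs(pitch) < threshold and abs(roll) < threshold:
        return "neutral"
    if abs(pitch) >= abs(roll):
        return "forward" if pitch > 0 else "backward"
    return "right" if roll > 0 else "left"
```

    A device held flat (acceleration (0, 0, 1) g) reads as "neutral", while tipping it forward past the threshold yields "forward"; a dead zone around level avoids jittery commands, which matters for the quick, remote interactions the paper targets.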