
    Watermarking technique for wireless multimedia sensor networks: A state of the art

    Wireless multimedia sensor networks (WMSNs) are an emerging type of sensor network whose nodes are equipped with microphones, cameras, and other sensors that produce multimedia content. These networks have the potential to enable a large class of applications ranging from military surveillance to modern healthcare. Multimedia nodes are susceptible to various types of attack, such as cropping, compression, or even physical capture and sensor replacement; hence, security is an important issue in WMSNs. However, because sensors are resource constrained, traditional computationally intensive security techniques based on data encryption are not well suited to WMSNs. Watermarking techniques, by contrast, are usually computationally lightweight and require little memory, which makes them an attractive alternative. The objective of this paper is to present a critical analysis of the existing state-of-the-art watermarking algorithms developed for WMSNs.
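    To illustrate why watermarking can be far cheaper than encryption on a sensor node, here is a minimal least-significant-bit (LSB) embedding sketch. This is a hypothetical illustration of the general idea, not any specific scheme from the survey; `embed_lsb` and `extract_lsb` are assumed names.

    ```python
    import numpy as np

    def embed_lsb(image, bits):
        """Embed watermark bits into the least significant bit of the
        first len(bits) pixels -- O(n) integer ops, no crypto primitives."""
        flat = image.flatten()                      # copy of the pixel data
        flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
        return flat.reshape(image.shape)

    def extract_lsb(image, n_bits):
        """Recover the embedded bits from the first n_bits pixels."""
        return image.flatten()[:n_bits] & 1

    img = np.random.default_rng(1).integers(0, 256, (8, 8), dtype=np.uint8)
    wm = np.array([1, 0, 1, 1, 0, 1, 0, 0], dtype=np.uint8)
    marked = embed_lsb(img, wm)
    print(extract_lsb(marked, 8))  # recovers the watermark bits
    ```

    Each pixel changes by at most one intensity level, so the mark is imperceptible while verification needs only bitwise operations.
    
    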

    Learning in Feedforward Neural Networks Accelerated by Transfer Entropy

    Current neural network architectures are often hard to train because of the increasing size and complexity of the datasets used. Our objective is to design more efficient training algorithms by exploiting causal relationships inferred from neural networks. Transfer entropy (TE) was initially introduced as an information-transfer measure to quantify the statistical coherence between events (time series); it was later related to causality, although the two are not the same. Only a few papers report applications of causality or TE in neural networks. Our contribution is an information-theoretical method for analyzing information transfer between the nodes of feedforward neural networks. The information transfer is measured by the TE of feedback neural connections. Intuitively, TE measures the relevance of a connection in the network, and the feedback amplifies this connection. We introduce a backpropagation-type training algorithm that uses TE feedback connections to improve its performance.
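    The quantity underlying this line of work can be sketched concretely. For binary time series with history length 1, TE(Y → X) = Σ p(x_{t+1}, x_t, y_t) log₂ [ p(x_{t+1} | x_t, y_t) / p(x_{t+1} | x_t) ]. A minimal plug-in estimator from joint-frequency counts (an illustrative sketch, not the paper's implementation):

    ```python
    import numpy as np
    from collections import Counter

    def transfer_entropy(x, y):
        """Plug-in estimate of TE(Y -> X) for two binary time series,
        using history length 1 and base-2 logarithms (bits)."""
        triples = Counter(zip(x[1:], x[:-1], y[:-1]))   # (x_{t+1}, x_t, y_t)
        pairs_xy = Counter(zip(x[:-1], y[:-1]))         # (x_t, y_t)
        pairs_xx = Counter(zip(x[1:], x[:-1]))          # (x_{t+1}, x_t)
        singles = Counter(x[:-1])                       # x_t
        n = len(x) - 1
        te = 0.0
        for (x1, x0, y0), c in triples.items():
            p_joint = c / n
            p_cond_xy = c / pairs_xy[(x0, y0)]            # p(x_{t+1} | x_t, y_t)
            p_cond_x = pairs_xx[(x1, x0)] / singles[x0]   # p(x_{t+1} | x_t)
            te += p_joint * np.log2(p_cond_xy / p_cond_x)
        return te

    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, 10000)
    x = np.empty_like(y)
    x[0] = 0
    x[1:] = y[:-1]   # x copies y with a one-step lag: strong Y -> X transfer
    print(transfer_entropy(x.tolist(), y.tolist()))  # close to 1 bit
    ```

    When x deterministically copies y with a lag, knowing y_t removes all uncertainty about x_{t+1}, so the estimate approaches 1 bit; for independent series it would be near zero.
    
    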

    Efficient Semantic Segmentation for Resource-Constrained Applications with Lightweight Neural Networks

    This thesis focuses on developing lightweight semantic segmentation models tailored for resource-constrained applications, balancing accuracy and computational efficiency. It introduces several novel concepts, including knowledge sharing, dense bottlenecks, and feature re-usability, which enrich the feature hierarchy by capturing fine-grained details, long-range dependencies, and diverse geometrical objects within the scene. To achieve precise object localization and improved semantic representations in real-time environments, the thesis introduces multi-stage feature aggregation, feature scaling, and hybrid-path attention methods.
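    The general shape of multi-stage feature aggregation can be sketched as follows: feature maps from coarse stages are upsampled to the finest resolution and fused, so coarse semantics and fine details combine. This is a generic illustration under assumed tensor shapes, not the thesis's actual architecture.

    ```python
    import numpy as np

    def aggregate_stages(features):
        """Upsample each stage's feature map (channels-first) to the
        finest stage's resolution via nearest-neighbour repetition,
        then fuse by summation."""
        target_h, target_w = features[0].shape[-2:]
        fused = np.zeros_like(features[0])
        for f in features:
            rep_h = target_h // f.shape[-2]
            rep_w = target_w // f.shape[-1]
            fused += np.repeat(np.repeat(f, rep_h, axis=-2), rep_w, axis=-1)
        return fused

    # three stages at 32x32, 16x16, 8x8 with C=4 channels each
    feats = [np.ones((4, 32, 32)), np.ones((4, 16, 16)), np.ones((4, 8, 8))]
    out = aggregate_stages(feats)
    print(out.shape)  # (4, 32, 32)
    ```

    Real designs typically use learned upsampling and attention-weighted fusion rather than plain summation; the sketch only shows the resolution-alignment step.
    
    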

    Learning Shapes Spontaneous Activity Itinerating over Memorized States

    Learning is a process that shapes neural dynamical systems so that an appropriate output pattern is generated for a given input. Such a memory is often considered to reside in one of the attractors of the neural dynamical system, selected by the initial neural state that the input specifies. Neither neural activity observed in the absence of inputs nor the changes in neural activity caused when an input is provided were studied extensively in the past. However, recent experimental studies have reported the existence of structured spontaneous neural activity and its changes when an input is provided. Against this background, we propose that memory recall occurs when the spontaneous neural activity changes to an appropriate output activity upon the application of an input, a phenomenon known as bifurcation in dynamical systems theory. We introduce a reinforcement-learning-based layered neural network model with two synaptic time scales; in this network, I/O relations are successively memorized when the difference between the time scales is appropriate. After the learning process is complete, the neural dynamics are shaped so that they change appropriately with each input. As the number of memorized patterns increases, the spontaneous neural activity generated after learning itinerates over the previously learned output patterns. This theoretical finding agrees remarkably well with recent experimental reports in which spontaneous neural activity in the visual cortex, in the absence of stimuli, itinerates over patterns evoked by previously applied signals. Our results suggest that itinerant spontaneous activity is a natural outcome of the successive learning of several patterns and that it facilitates bifurcation of the network when an input is provided.
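    The two-synaptic-time-scale idea can be sketched in a drastically simplified form (a hypothetical toy, not the paper's model): each connection carries a fast component that adapts quickly to the error signal and decays, and a slow component that consolidates the fast trace, so memories settle into the slow weights.

    ```python
    import numpy as np

    # Toy two-timescale synaptic update (assumed rates, not the paper's):
    # fast weights track the current error quickly and decay; slow weights
    # gradually absorb the fast trace, consolidating the memory.
    w_fast = np.zeros(4)
    w_slow = np.zeros(4)
    eta_fast, eta_slow, decay = 0.5, 0.01, 0.9   # fast rate >> slow rate

    target = np.array([1.0, -1.0, 0.5, 0.0])     # desired output pattern
    for _ in range(1000):
        err = target - (w_fast + w_slow)         # effective weight = fast + slow
        w_fast += eta_fast * err                 # rapid adaptation
        w_slow += eta_slow * w_fast              # slow consolidation
        w_fast *= decay                          # fast trace fades

    print(np.round(w_fast + w_slow, 2))          # effective weights near target
    ```

    After consolidation the fast component has decayed toward zero and the slow weights alone store the pattern, which is the separation of time scales the abstract relies on.
    
    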

    A study of the neural activity controlling the salt chemotaxis circuit in the nematode

    University of Tsukuba, 201