Improvement of fingerprint retrieval by a statistical classifier
The topics of fingerprint classification, indexing, and retrieval have been studied extensively in the past decades. One problem faced by researchers is that in all publicly available fingerprint databases, only a few fingerprint samples from each individual are available for training and testing, making it inappropriate to use sophisticated statistical methods for recognition. Hence most previous works resorted to simple k-nearest neighbor (k-NN) classification. However, the k-NN classifier has the drawbacks of being comparatively slow and less accurate. In this paper, we tackle this problem by first artificially expanding the set of training samples using our previously proposed spatial modeling technique. With the expanded training set, we are then able to employ a more sophisticated classifier such as the Bayes classifier for recognition. We apply the proposed method to the problem of one-to-N fingerprint identification and retrieval. The accuracy and speed are evaluated using the benchmark FVC 2000, FVC 2002, and NIST-4 databases, and satisfactory retrieval performance is achieved. © 2010 IEEE.
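The gain from pairing sample expansion with a parametric classifier can be illustrated in a few lines. This is a toy sketch, not the paper's spatial modeling technique: virtual samples here are generated by simple random perturbation of the few genuine samples, and the "Bayes classifier" is a per-class Gaussian quadratic discriminant; all dimensions and noise levels are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the paper's spatial modeling: expand each
# class's few genuine samples by adding small random perturbations.
def expand(samples, n_virtual, noise=0.1):
    reps = rng.choice(len(samples), n_virtual)
    return samples[reps] + rng.normal(0, noise, (n_virtual, samples.shape[1]))

# Two toy "finger" classes with only 3 genuine samples each.
genuine = {0: rng.normal(0.0, 0.2, (3, 4)), 1: rng.normal(1.0, 0.2, (3, 4))}
train = {c: expand(s, 200) for c, s in genuine.items()}

# Quadratic Bayes classifier: a Gaussian fitted per class to the
# expanded set, which would be singular with only 3 genuine samples.
params = {c: (x.mean(0), np.cov(x.T)) for c, x in train.items()}

def bayes_classify(x):
    def log_lik(c):
        mu, cov = params[c]
        d = x - mu
        return -0.5 * (d @ np.linalg.solve(cov, d) + np.log(np.linalg.det(cov)))
    return max(params, key=log_lik)

probe = rng.normal(1.0, 0.2, 4)   # a query drawn from class 1
print(bayes_classify(probe))
```

With well-separated toy classes the probe is recovered correctly; the point is only that the expanded set makes the class covariances estimable at all.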
Recognition of handwritten Chinese characters by combining regularization, Fisher's discriminant and distorted sample generation
Proceedings of the 10th International Conference on Document Analysis and Recognition, 2009, p. 1026–1030
The problem of offline handwritten Chinese character recognition has been extensively studied by many researchers and very high recognition rates have been reported. In this paper, we propose to further boost the recognition rate by incorporating a distortion model that artificially generates a huge number of virtual training samples from existing ones. We achieve a record high recognition rate of 99.46% on the ETL-9B database. Traditionally, when the dimension of the feature vector is high and the number of training samples is not sufficient, the remedies are to (i) regularize the class covariance matrices in the discriminant functions, (ii) employ Fisher's dimension reduction technique to reduce the feature dimension, and (iii) generate a huge number of virtual training samples from existing ones. The second contribution of this paper is the investigation of the relative effectiveness of these three methods for boosting the recognition rate. © 2009 IEEE.
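Remedy (i) above, covariance regularization, addresses the case where feature dimension exceeds the per-class sample count and the sample covariance is singular. A minimal sketch under made-up dimensions (64 features, 20 samples) using simple shrinkage toward a scaled identity, one common regularizer:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setting: feature dimension (64) exceeds samples per class (20),
# so the sample covariance is rank-deficient and the quadratic
# discriminant cannot invert it. Shrinkage toward a scaled identity
# restores invertibility. The shrinkage weight is illustrative.
d, n = 64, 20
X = rng.normal(0, 1, (n, d))
S = np.cov(X.T)                      # rank <= n-1 < d: singular
lam = 0.5                            # shrinkage weight (made up)
S_reg = (1 - lam) * S + lam * (np.trace(S) / d) * np.eye(d)

print(np.linalg.matrix_rank(S))      # 19: the raw estimate is singular
print(np.linalg.cond(S_reg) < 1e6)   # True: regularized version invertible
```

The same idea underlies regularized discriminant analysis; the paper compares this against Fisher's dimension reduction and virtual-sample generation.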
Quality-of-service routing with two concave constraints
Routing is the process of finding a network path from a source node to a destination node. A good routing protocol should find the "best path" from a source to a destination. When there are independent constraints to be considered, the "best path" is not well-defined. In our previous work, we developed a line segment representation for Quality-of-Service routing with bandwidth and delay requirements. In this paper, we show how to apply the line segment representation when a request has two concave constraints. We develop a series of operations for constructing routing tables under the distance-vector protocol and evaluate the performance through extensive simulations. © 2008 IEEE.
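A metric is "concave" when a path's value is the minimum of its links' values (bandwidth is the classic example), so a path satisfies a concave constraint iff its bottleneck does. A minimal illustration with two such metrics on a fixed path; all link values are made-up numbers, not from the paper:

```python
# Each link carries two concave metrics, e.g. (bandwidth, metric2).
# A path's value in each metric is the minimum over its links, and a
# request (r1, r2) is feasible iff both bottlenecks meet it.
path_links = [(100, 8), (40, 12), (70, 5)]

def path_metric(links):
    # Component-wise minimum: the bottleneck in each concave metric.
    return tuple(min(vals) for vals in zip(*links))

def feasible(links, request):
    return all(p >= r for p, r in zip(path_metric(links), request))

print(path_metric(path_links))        # (40, 5)
print(feasible(path_links, (30, 5)))  # True
print(feasible(path_links, (50, 5)))  # False: bandwidth bottleneck is 40
```

Because both constraints compose by minimum, a distance-vector table can propagate the pair of bottleneck values hop by hop, which is what makes the line segment operations workable.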
THE PENALTY RULE: A MODERN INTERPRETATION
This paper focuses on the common law doctrine of the penalty rule and the recent Supreme Court decision in Cavendish Square Holding v Makdessi and ParkingEye v Beavis. The state of the penalty rule prior to the judgment was unsatisfactory and criticized by commentators and practitioners alike. Its indiscriminate application and unclear criteria were a needless source of uncertainty for both contracting parties and lawyers. Nevertheless, their Lordships in Cavendish refused to abolish the penalty rule but acknowledged its limited application in the modern commercial context. This paper accordingly aims to justify the continued existence of the doctrine on theoretical grounds within the English private law framework despite its practical obsolescence.
A paracasting model for concurrent access to replicated content
We propose a framework to study how to effectively download a copy of the same document from a set of replicated servers. A generalized form of application-layer anycasting, known as paracasting, has been proposed, in which a subset of the replicated servers cooperatively satisfies a client's request. Each participating server satisfies the request in part by transmitting a subset of the requested file to the client. The client recovers the complete file once the different parts sent by the participating servers are received. This framework allows us to estimate the average time to download a file from the set of homogeneous replicated servers, and the request blocking probability when each server can accept and serve a finite number of concurrent requests. Our results show that the file download time drops when a request is served concurrently by a larger number of homogeneous replicated servers, although the performance improvement quickly saturates as the number of servers increases. If the total number of requests that a server can handle simultaneously is finite, the request blocking probability increases with the number of replicated servers used to serve a request concurrently. Therefore, paracasting is effective in using a small number of servers, say up to four, to serve a request concurrently.
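The diminishing returns can be seen in a back-of-envelope model that is much cruder than the paper's: split a file evenly over n equal-rate servers and charge a fixed per-request overhead. All numbers are illustrative.

```python
# Toy model: file of size S MB split evenly over n servers of rate r
# MB/s each, plus a fixed per-request overhead in seconds. The
# transfer term shrinks as 1/n while the overhead does not, so the
# absolute gain from each extra server decays quickly.
S, r, overhead = 100.0, 1.0, 2.0   # made-up values

def download_time(n):
    return S / (n * r) + overhead

times = [round(download_time(n), 1) for n in (1, 2, 4, 8, 16)]
print(times)   # [102.0, 52.0, 27.0, 14.5, 8.2]
```

Going from 1 to 2 servers saves 50 s here, while going from 8 to 16 saves about 6 s, which is the qualitative saturation the abstract reports (and the blocking-probability cost of extra servers is not even modeled).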
A resequencing model for high speed networks
In this paper, we propose a framework to study the resequencing mechanism in high speed networks. This framework allows us to estimate the packet resequencing delay, the total packet delay, and the resequencing buffer occupancy distributions when data traffic is dispersed on multiple disjoint paths. In contrast to most existing work, the estimation of the end-to-end path delay distribution is decoupled from the queueing model for resequencing. This leads to a simple yet general model, which can be used with other measurement-based tools for estimating the end-to-end path delay distribution to find an optimal split of traffic. We consider a multiple-node M/M/1 tandem network as a path model. When end-to-end path delays are Gaussian distributed, our results show that the packet resequencing delay, the total packet delay, and the resequencing buffer occupancy drop when the traffic is spread over a larger number of homogeneous paths, although the performance improvement quickly saturates as the number of paths increases. We find that the number of paths used in multipath routing should be small, say up to three. Moreover, the optimal split of traffic places equal loads on the paths.
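A crude Monte-Carlo sketch of the resequencing effect, not the paper's analytical model: packets are sprayed over k homogeneous paths whose end-to-end delay is Gaussian (the abstract's distributional assumption), with mean and spread shrinking as the per-path load drops, loosely like an M/M/1 delay 1/(mu - lam/k). All parameters are made up.

```python
import random

random.seed(1)

def mean_reseq_delay(k, n=20000, lam=0.9, mu=1.0):
    # Per-path delay statistics improve as load is split over k paths.
    path_mean = 1.0 / (mu - lam / k)     # M/M/1-like mean delay
    sigma = 0.5 * path_mean              # spread shrinks with the mean
    send = [i / lam for i in range(n)]   # packet launch times
    arrive = [s + max(0.0, random.gauss(path_mean, sigma)) for s in send]
    release, held = 0.0, 0.0
    for a in arrive:
        release = max(release, a)        # deliver strictly in sequence
        held += release - a              # time spent in resequencing buffer
    return held / n

delays = [mean_reseq_delay(k) for k in (1, 2, 3, 4)]
print([round(d, 2) for d in delays])    # drops sharply, then flattens
```

Even this toy version reproduces the shape of the result: the biggest improvement comes from the first extra path, and gains flatten by three or four paths.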
Does it hurt when others prosper?: Exploring the impact of heterogeneous reordering robustness of TCP
The congestion control mechanisms in the standardized Transmission Control Protocol (TCP) may misinterpret packet reordering as congestive loss, leading to spurious congestion response and under-utilization of network capacity. Therefore, many TCP enhancements have been proposed to better differentiate between packet reordering and congestive loss, in order to enhance the reordering robustness (RR) of TCP. Since such enhancements are incrementally deployed, it is important to study the interactions of TCP flows with heterogeneous RR. This paper presents the first systematic study of such interactions by exploring how changing RR of TCP flows influences the bandwidth sharing among these flows. We define the quantified RR (QRR) of a TCP flow as the probability that packet reordering causes congestion response. We analyze the variation of bandwidth sharing as QRR changes. This leads to the discovery of several interesting properties. Most notably, we discover the counter-intuitive result that changing one flow's QRR does not affect its competing flows in certain network topologies. We further characterize the deviation, from the ideal case of bandwidth sharing, as RR changes. We find that enhancing RR of a flow may increase, rather than decrease, the deviation in some typical network scenarios. © 2013 IEEE.
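The effect of QRR on a single flow's throughput can be caricatured with a toy AIMD loop, far simpler than the paper's multi-flow analysis: each round the window halves with probability loss + qrr * reorder (a spurious response when triggered by reordering) and otherwise grows by one. All probabilities are made up.

```python
import random

random.seed(2)

def avg_window(qrr, p_loss=0.01, p_reorder=0.05, rounds=200000):
    # qrr: probability that a reordering event triggers a (spurious)
    # congestion response, per the abstract's definition of QRR.
    w, total = 1.0, 0.0
    for _ in range(rounds):
        if random.random() < p_loss + qrr * p_reorder:
            w = max(1.0, w / 2)          # multiplicative decrease
        else:
            w += 1.0                     # additive increase
        total += w
    return total / rounds

low_qrr, high_qrr = avg_window(0.0), avg_window(1.0)
print(low_qrr > high_qrr)   # True: robustness to reordering pays off
```

This only captures the single-flow incentive; the paper's interesting results concern what happens to *competing* flows when one flow's QRR changes, which a one-flow model cannot show.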
Communication-oriented smart grid framework
Upgrading the existing electricity grids into smart grids relies heavily on the development of information and communication technology that supports a highly reliable real-time monitoring and control system as well as coordination of the various electricity utilities and market participants. In this upgrading process, smart grid communication is the key to success, and a simple but complete, innovative but compatible high-level communication-oriented smart grid framework is needed. This paper proposes a simple and flexible three-entity framework, so that devices employing existing technologies are supported and can interoperate with those employing new technologies. © 2011 IEEE.
The 2nd IEEE International Conference on Smart Grid Communications (SmartGridComm 2011), Brussels, Belgium, 17-20 October 2011. In Proceedings of 2nd SmartGridComm, 2011, p. 61-6
Adaptive topology-transparent distributed scheduling in wireless networks
Proceedings of the IEEE International Conference on Communications, 2010, p. 1-5
Transmission scheduling is a key design problem in wireless multi-hop networks. Many transmission scheduling algorithms have been proposed to maximize spatial reuse and minimize the time division multiple access (TDMA) frame length. Most scheduling algorithms are topology-dependent: they are generally graph-based and depend on exact network topology information, and thus cannot adapt well to a dynamic wireless environment. In contrast, topology-transparent TDMA scheduling algorithms do not need detailed topology information, but they offer very low minimum throughput. The objective of this work is to propose an adaptive topology-transparent scheduling algorithm with better throughput performance. With our algorithm, each node finds a transmission schedule so as to reduce transmission conflicts and adapt better to the changing network environment. Simulation results show that our algorithm outperforms existing topology-transparent algorithms. ©2010 IEEE.
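For background, a classic construction of topology-transparent schedules (in the style of Chlamtac and Faragó, which the adaptive algorithm builds on) assigns each node a degree-k polynomial over GF(p): node i transmits in slot f_i(j) of subframe j. Two distinct polynomials agree in at most k of the p subframes, so with p large enough relative to k and the maximum degree, every node keeps some conflict-free slots regardless of topology. A small self-check with illustrative p and k:

```python
# Topology-transparent TDMA sketch: a frame has p subframes of p
# slots; node i transmits in slot f_i(j) of subframe j, where f_i is
# its assigned degree-k polynomial over GF(p). p and k are made up.
p, k = 7, 2

def schedule(coeffs):
    # Evaluate the polynomial at j = 0..p-1, all arithmetic mod p.
    return [sum(c * j**e for e, c in enumerate(coeffs)) % p for j in range(p)]

f = schedule([1, 2, 3])   # f(x) = 1 + 2x + 3x^2 (mod 7)
g = schedule([4, 0, 1])   # g(x) = 4 +      x^2 (mod 7)

# f - g is a nonzero polynomial of degree <= k, so it has at most k
# roots mod p: the two nodes collide in at most k subframes.
collisions = sum(a == b for a, b in zip(f, g))
print(collisions <= k)    # True
```

The guaranteed throughput of such schedules is low, which is exactly the weakness the adaptive algorithm in the abstract targets.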
Robust dispatch with power flow routing and renewables
The uncertainty and variability of renewable energy sources are challenging problems in the operation of power systems. In this paper, we focus on the robust dispatch of generators to overcome the uncertainty of renewable power predictions. The energy management costs of supply, spinning reserve, and power losses are jointly optimized by the proposed robust optimal power flow (OPF) method with a column-and-constraint generation algorithm. Conic relaxation is applied to the non-convex alternating-current power flow regions, with the phase angle constraints for loops retained by linear approximation. The proposed method allows us to solve the robust OPF problem efficiently with good accuracy and to incorporate power flow controllers and routers into the OPF framework. Numerical results on the IEEE Reliability Test System show the efficacy of our robust dispatch strategy in guaranteeing immunity against uncertain renewable generation, as well as in reducing energy management costs through power flow routing. © 2015 IEEE.
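The column-and-constraint generation idea can be shown on a toy robust dispatch that strips away the OPF physics: choose generation g so that g >= demand - w for every renewable output w in an uncertainty set, but add scenarios one at a time instead of all at once. Everything here (demand, the finite uncertainty set, the unit cost) is made up for illustration.

```python
# Column-and-constraint generation, minimal form: the master problem
# dispatches against the scenarios seen so far; the subproblem finds
# the worst renewable outcome; if the dispatch is violated, that
# scenario's constraints are added and the master is re-solved.
demand, w_set = 10.0, [2.0, 4.0, 6.0]   # uncertain renewable outputs

scenarios = [w_set[-1]]                  # start from an optimistic scenario
while True:
    g = max(demand - w for w in scenarios)   # master: cheapest feasible g
    worst = min(w_set)                       # subproblem: worst-case w
    if g >= demand - worst - 1e-9:
        break                                # robust: no violated scenario
    scenarios.append(worst)                  # add the violated scenario

print(g)   # 8.0: covers the worst case w = 2
```

In the paper the master is a conic-relaxed OPF and the subproblem searches a continuous uncertainty set, but the add-only-violated-scenarios loop is the same; it converges here in two iterations.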