34 research outputs found

    Efficient Database Distribution Using Local Search Algorithm

    A problem in the railway database domain is identified. The focus of the problem is to reduce the average response time of all read and write queries to the railway database. One way of doing this is to deploy more than one database server and distribute the database across these servers to improve performance. In this work we propose an efficient distribution of the database across these servers, taking into account the read and write request frequencies at all locations. The problem of distributing the database across different locations is mapped to the well-studied Uncapacitated Facility Location (UFL) problem. Various techniques, such as the greedy approach, LP rounding, the primal-dual technique, and local search, have been proposed to tackle this problem. Of those, we use the local search technique in this work. In particular, a polynomial version of the local search approximation algorithm is used to solve the railway database problem. The distributed database is implemented using the PostgreSQL database server, and the JBoss application server is used to manage global transactions. On this architecture, the database is distributed using the locally optimal solution obtained by the local search algorithm, and this distribution is compared with other solutions in terms of the average response time for read and write requests.
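    A minimal sketch of the local search idea for UFL, using add, drop, and swap moves until no move lowers the total cost; the cost data and starting solution below are illustrative, not the exact polynomial-time variant used in the thesis:

```python
def total_cost(open_set, open_cost, conn_cost):
    # Facility opening costs plus each client's cheapest connection
    # to any currently open facility.
    return (sum(open_cost[f] for f in open_set)
            + sum(min(conn_cost[c][f] for f in open_set)
                  for c in range(len(conn_cost))))

def local_search_ufl(open_cost, conn_cost):
    """Local search for Uncapacitated Facility Location.

    open_cost[f]    : cost of opening facility f
    conn_cost[c][f] : cost of serving client c from facility f
    Returns a locally optimal set of open facilities.
    """
    facilities = range(len(open_cost))
    open_set = {0}  # arbitrary initial solution
    improved = True
    while improved:
        improved = False
        # Neighborhood: add one facility, drop one, or swap one for another.
        candidates = ([open_set | {f} for f in facilities if f not in open_set]
                      + [open_set - {f} for f in open_set if len(open_set) > 1]
                      + [(open_set - {f}) | {g} for f in open_set
                         for g in facilities if g not in open_set])
        for cand in candidates:
            if total_cost(cand, open_cost, conn_cost) < total_cost(open_set, open_cost, conn_cost):
                open_set, improved = cand, True
                break
    return open_set
```

    With two cheap facilities each close to half the clients, the search opens both of them even when started from a single facility.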

    Predictive Analytics For Controlling Tax Evasion

    Tax evasion is an illegal practice in which a person or a business entity intentionally avoids paying its true tax liability. Any business entity is required by law to file tax return statements on a periodical schedule. Failing to file the tax return statement is one of the most rudimentary forms of tax evasion. The dealers committing tax evasion in this way are called return defaulters. We constructed a logistic regression model that predicts with high accuracy whether a business entity is a potential return defaulter for the upcoming tax-filing period. For this, we analyzed the effect of the amount of sales/purchase transactions among the business entities (dealers) and the mean absolute deviation (MAD) value of the first-digit Benford's analysis on sales transactions by a business entity. We developed and deployed this model for the commercial taxes department, government of Telangana, India. Another, much more sophisticated technique used for tax evasion is known as circular trading. Circular trading is a fraudulent trading scheme used by notorious tax evaders with the motivation of preventing the tax enforcement authorities from identifying their suspicious transactions. Dealers use this technique to collude with each other and carry out heavy illegitimate trade among themselves to hide suspicious sales transactions. We developed an algorithm to detect groups of colluding dealers who do heavy illegitimate trading among themselves. For this, we formulated the problem as finding clusters in a weighted directed graph. The novelty of our approach is that we used Benford's analysis to define the edge weights and defined a measure similar to the F1 score to find the similarity between two clusters. The proposed algorithm was run on the commercial tax data set, and the results obtained contain a group of several colluding dealers.
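    The first-digit Benford check mentioned above can be sketched as follows: the MAD statistic measures how far the observed leading-digit frequencies of a dealer's transaction amounts deviate from Benford's law, with larger values flagging suspicious distributions (the amounts and threshold here are hypothetical, not from the deployed model):

```python
import math
from collections import Counter

def first_digit(x):
    """Leading nonzero decimal digit of a transaction amount."""
    for ch in str(abs(x)):
        if ch in '123456789':
            return int(ch)
    return None  # amount was zero

def benford_mad(amounts):
    """Mean absolute deviation between observed first-digit
    frequencies and Benford's law P(d) = log10(1 + 1/d)."""
    digits = [d for d in map(first_digit, amounts) if d is not None]
    counts = Counter(digits)
    n = len(digits)
    return sum(abs(counts[d] / n - math.log10(1 + 1 / d))
               for d in range(1, 10)) / 9
```

    A Benford-conforming sample yields a MAD near zero, while amounts with uniformly distributed leading digits score markedly higher.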

    An algorithmic approach to handle circular trading in commercial taxing system

    This article examines fraudulent activity driven by the unscrupulous desire of people to gain personal benefit by manipulating taxes in the taxing system. Taxpayers manipulate the money paid to the tax authorities through avoidance and evasion activities. In this paper, we deal with a specific technique used by tax evaders known as circular trading, and we propose an algorithm for the detection and analysis of circular trades. To detect these circular trades, we have modeled the whole system as a directed graph, with the actors being vertices and the transactions among them as directed edges. The commercial tax dataset was provided by the government of Telangana, India; it contains the transaction details of participants involved in a known circular trade.
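    A minimal sketch of the graph model described above: transactions become directed edges, and a depth-first search reports one directed cycle of traders if any exists (vertex names are hypothetical; the paper's own detection algorithm may differ in detail):

```python
def find_cycle(edges):
    """Return one directed cycle as a list of vertices, or None.

    edges: iterable of (seller, buyer) transaction pairs.
    """
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, [])
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on stack / finished
    color = {v: WHITE for v in adj}
    parent = {}

    def dfs(u):
        color[u] = GRAY
        for w in adj[u]:
            if color[w] == GRAY:
                # Back edge u -> w closes a cycle; walk parents back to w.
                cycle, x = [u], u
                while x != w:
                    x = parent[x]
                    cycle.append(x)
                cycle.reverse()
                return cycle
            if color[w] == WHITE:
                parent[w] = u
                found = dfs(w)
                if found:
                    return found
        color[u] = BLACK
        return None

    for v in adj:
        if color[v] == WHITE:
            found = dfs(v)
            if found:
                return found
    return None
```

    On a toy trade network A→B→C→A with an extra edge C→D, the search returns the three-trader cycle; an acyclic network yields None.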

    Algorithms for Power Aware Testing of Nanometer Digital ICs

    At-speed testing of deep-submicron digital very large scale integrated (VLSI) circuits has become mandatory to catch small delay defects. Now, due to the continuous shrinking of complementary metal oxide semiconductor (CMOS) transistor feature size, power density grows geometrically with technology scaling. Additionally, the power dissipated inside a digital circuit during the testing phase (for test vectors under all fault models (Potluri, 2015)) is several times higher than its power dissipation during the normal functional phase of operation. Due to this, the currents that flow in the power grid during the testing phase are much higher than what the power grid is designed for (the functional phase of operation). As a result, during at-speed testing, the supply grid experiences unacceptable supply IR-drop, ultimately leading to delay failures during at-speed testing. Since these failures are specific to testing and do not occur during the functional phase of operation of the chip, they are usually referred to as false failures, and they reduce the yield of the chip, which is undesirable. In the nanometer regime, process parameter variations have become a major problem. Due to the variation in signalling delays caused by these variations, it is important to perform at-speed testing even for stuck faults, to reduce test escapes (McCluskey and Tseng, 2000; Vorisek et al., 2004). In this context, the problem of excessive peak power dissipation causing false failures, which was addressed previously in the context of at-speed transition fault testing (Saxena et al., 2003; Devanathan et al., 2007a,b,c), also becomes prominent in the context of at-speed testing of stuck faults (Maxwell et al., 1996; McCluskey and Tseng, 2000; Vorisek et al., 2004; Prabhu and Abraham, 2012; Potluri, 2015; Potluri et al., 2015). 
It is well known that excessive supply IR-drop during at-speed testing can be kept under control by minimizing switching activity during testing (Saxena et al., 2003). There is a rich collection of techniques proposed in the past for reduction of peak switching activity during at-speed testing of transition/delay faults in both combinational and sequential circuits. As far as at-speed testing of stuck faults is concerned, while some techniques were proposed in the past for combinational circuits (Girard et al., 1998; Dabholkar et al., 1998), there are no techniques addressing the same for sequential circuits. This thesis addresses this open problem. We propose algorithms for minimization of peak switching activity during at-speed testing of stuck faults in sequential digital circuits under the combinational state preservation scan (CSP-scan) architecture (Potluri, 2015; Potluri et al., 2015). First, we show that, under this CSP-scan architecture, when the test set is completely specified, the peak switching activity during testing can be minimized by solving the Bottleneck Traveling Salesman Problem (BTSP). This mapping of the peak test switching activity minimization problem to BTSP is novel and proposed for the first time in the literature. Usually, as circuit size increases, the percentage of don't cares in the test set increases. As a result, test vector ordering with an arbitrary filling of don't care bits is insufficient for producing effective reduction in switching activity during testing of large circuits. Since don't cares dominate the test sets for larger circuits, don't care filling plays a crucial role in reducing switching activity during testing. 
Taking this into consideration, we propose an algorithm, XStat, which is capable of performing test vector ordering while preserving don't care bits in the test vectors, following which the don't cares are filled in an intelligent fashion for minimizing input switching activity, which effectively minimizes switching activity inside the circuit (Girard et al., 1998). Through empirical validation on benchmark circuits, we show that XStat significantly minimizes peak switching activity during testing. Although XStat is a very powerful heuristic for minimizing peak input-switching-activity, it does not guarantee optimality. To address this issue, we propose an algorithm that uses dynamic programming to calculate the lower bound for a given sequence of test vectors, and subsequently uses a greedy strategy for filling don't cares in this sequence to achieve this lower bound, thereby guaranteeing optimality. This algorithm, which we refer to as DP-fill in this thesis, provides the globally optimal solution for minimizing peak input-switching-activity and is the best known in the literature for minimizing peak input-switching-activity during testing. The proof of optimality of DP-fill in minimizing peak input-switching-activity is also provided in this thesis.
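The role of don't-care filling can be illustrated with a much simpler, well-known heuristic, adjacent (repeat) fill, which copies each X from the same bit position of the previously filled vector so that the bit contributes no input toggle; this is only an illustration of the idea, not the XStat or DP-fill algorithms from the thesis:

```python
def adjacent_fill(vectors):
    """Fill 'X' bits in an ordered test sequence by repeating the
    corresponding bit of the previously filled vector (first vector's
    X bits default to '0'). Each copied bit causes no transition."""
    filled, prev = [], None
    for vec in vectors:
        bits = list(vec)
        for i, b in enumerate(bits):
            if b == 'X':
                bits[i] = prev[i] if prev is not None else '0'
        prev = ''.join(bits)
        filled.append(prev)
    return filled

def peak_switching(filled):
    """Peak input switching activity: the maximum Hamming distance
    between consecutive fully specified test vectors."""
    return max((sum(a != b for a, b in zip(u, v))
                for u, v in zip(filled, filled[1:])), default=0)
```

On a toy sequence of three 4-bit test cubes, adjacent fill keeps the peak at a single toggling bit per vector pair.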

    DP-fill: a dynamic programming approach to X-filling for minimizing peak test power in scan tests

    At-speed testing is crucial to catch small delay defects that occur during the manufacture of high-performance digital chips. Launch-Off-Capture (LOC) and Launch-Off-Shift (LOS) are two prevalently used schemes for this purpose. The LOS scheme achieves higher fault coverage while consuming less test time than the LOC scheme, but dissipates higher power during the capture phase of the at-speed test. Excessive IR-drop on the power grid during the capture phase causes false delay failures, leading to significant and unwarranted yield reduction. As reported in the literature, intelligent filling of don't care bits (X-filling) in test cubes has yielded significant power reduction. Given that the tests output by automatic test pattern generation (ATPG) tools for big circuits have a large number of don't care bits, the X-filling technique is very effective for them. Assuming that the design for testability (DFT) scheme preserves the state of the combinational logic between capture phases of successive patterns, this paper maps the problem of optimal X-filling for peak power minimization during the LOS scheme to a variant of the interval coloring problem and proposes a dynamic programming (DP) algorithm for the same, along with a theoretical proof of its optimality. To the best of our knowledge, this is the first reported X-filling algorithm that is optimal. The proposed algorithm, when experimented on ITC99 benchmarks, produced peak power savings of up to 34% over the best known low-power X-filling algorithm for LOS testing. Interestingly, it is observed that the power savings increase with the size of the circuit.

    Nash equilibria in Fisher market

    Much work has been done on the computation of market equilibria. However, due to strategic play by buyers, it is not clear whether these equilibria are actually observed in the market. Motivated by the observation that a buyer may derive a better payoff by feigning a different utility function and thereby manipulating the Fisher market equilibrium, we formulate the Fisher market game, in which buyers strategize by posing different utility functions. We show that the existence of a conflict-free allocation is a necessary condition for Nash equilibria (NE) and also sufficient for symmetric NE in this game. There are many NE with very different payoffs, and the Fisher equilibrium payoff is captured at a symmetric NE. We provide a complete polyhedral characterization of all the NE for the two-buyer market game. Surprisingly, all the NE of this game turn out to be symmetric, and the corresponding payoffs constitute a piecewise linear concave curve. We also study the correlated equilibria of this game and show that third-party mediation does not help to achieve a better payoff than NE payoffs.
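    For context, the non-strategic baseline that buyers manipulate in the game above is the Fisher market equilibrium itself. For linear utilities it can be approximated by proportional response dynamics, in which each buyer splits its budget over goods in proportion to the utility currently derived from each; the budgets and utilities below are hypothetical, and this sketch is not a construction from the paper:

```python
def fisher_equilibrium(budgets, utilities, iters=1000):
    """Proportional response dynamics for a linear Fisher market.

    budgets[i]      : money endowment of buyer i
    utilities[i][j] : buyer i's utility per unit of good j
    Returns approximate equilibrium prices (assumes each good is
    valued by at least one buyer, so prices stay positive).
    """
    n, m = len(budgets), len(utilities[0])
    # Start by spreading each budget evenly over all goods.
    bids = [[B / m] * m for B in budgets]
    for _ in range(iters):
        prices = [sum(bids[i][j] for i in range(n)) for j in range(m)]
        new_bids = []
        for i in range(n):
            # Utility buyer i derives from each good at the current
            # bid-proportional allocation x_ij = b_ij / p_j.
            gains = [utilities[i][j] * bids[i][j] / prices[j]
                     if prices[j] > 0 else 0.0 for j in range(m)]
            total = sum(gains)
            new_bids.append([budgets[i] * g / total for g in gains])
        bids = new_bids
    return [sum(bids[i][j] for i in range(n)) for j in range(m)]
```

    With two buyers who each value only one distinct good, the prices converge to the buyers' budgets; with identical utilities, total money is split evenly across goods.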

    An algorithmic approach to handle circular trading in commercial taxing system

    Tax manipulation comes in a variety of forms, with different motivations and varying complexities. In this paper, we deal with a specific technique used by tax evaders known as circular trading. In particular, we define algorithms for the detection and analysis of circular trades. To achieve this, we have modelled the whole system as a directed graph, with the actors being vertices and the transactions among them as directed edges. We illustrate the results obtained after running the proposed algorithms on the commercial tax dataset of the government of Telangana, India, which contains the transaction details of a set of participants involved in a known circular trade.

    A Graph Theoretical Approach for Identifying Fraudulent Transactions in Circular Trading

    Circular trading is an infamous technique used by tax evaders to prevent tax enforcement officers from detecting suspicious transactions. Dealers using this technique superimpose suspicious transactions with several illegitimate sales transactions carried out in a circular manner. In this paper, we address this problem by developing an algorithm that detects circular trading and removes the illegitimate cycles to uncover the suspicious transactions. We formulate the problem as finding and then deleting specific types of cycles in a directed edge-labeled multigraph. We ran this algorithm on the commercial tax data set provided by the government of Telangana, India, and discovered several suspicious transactions.
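    One plausible cycle-deletion step is sketched below, under the assumption that a detected cycle is cancelled by subtracting the minimum traded amount along it (a flow-cancellation step; the paper's own deletion rule may differ). Whatever trade volume survives the cancellation is the residue left for inspection:

```python
def cancel_cycle(amounts, cycle):
    """Cancel one detected trading cycle in place.

    amounts : dict mapping (seller, buyer) -> traded amount
    cycle   : list of vertices forming a directed cycle
    Subtracts the cycle's minimum edge amount from every edge on it,
    deleting edges that drop to zero, and returns the residual dict.
    """
    edges = list(zip(cycle, cycle[1:] + cycle[:1]))
    delta = min(amounts[e] for e in edges)
    for e in edges:
        amounts[e] -= delta
        if amounts[e] == 0:
            del amounts[e]  # fully explained by the circular trade
    return amounts
```

    After cancelling the cycle A→B→C→A in a toy ledger, only the excess amounts on the cycle and the off-cycle transactions remain as candidates for scrutiny.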