Constraining the Number of Positive Responses in Adaptive, Non-Adaptive, and Two-Stage Group Testing
Group testing is a well-known search problem that consists of detecting the defective members of a set of objects O by performing tests on properly chosen subsets (pools) of O. In classical group testing the goal is to find all defectives by using as few tests as possible. We consider a variant of classical group testing in which one is concerned not only with minimizing the total number of tests but also with reducing the number of tests involving defective elements. The rationale behind this search model is that in many practical applications the devices used for the tests are subject to deterioration due to exposure to, or interaction with, the defective elements. In this paper we consider adaptive, non-adaptive, and two-stage group testing. For all three scenarios, we derive upper and lower bounds on the number of "yes" responses that must be admitted by any strategy performing at most a given number t of tests. In particular, for the adaptive case we provide an algorithm that uses a number of "yes" responses exceeding the given lower bound by a small constant. Interestingly, this bound can also be asymptotically attained by our two-stage algorithm, a phenomenon analogous to the one occurring in classical group testing. For the non-adaptive scenario we give almost matching upper and lower bounds on the number of "yes" responses. In particular, we give two constructions achieving the same asymptotic bound; an interesting feature of one of them is that it is explicit. The bounds for the non-adaptive and two-stage cases follow from bounds on the optimal sizes of new variants of d-cover-free families and (p,d)-cover-free families introduced in this paper, which we believe may also be of interest in other contexts.
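As a concrete baseline, classical adaptive group testing can be carried out by binary splitting; the sketch below (a textbook routine, not the algorithm of the paper) locates one defective and also counts how many positive ("yes") responses the search triggers, which is the quantity this variant seeks to bound:

```python
def binary_split(items, contains_defective):
    """Classical adaptive binary splitting: find one defective in a
    pool known to contain at least one, counting both the number of
    tests and the number of positive ('yes') responses."""
    tests = yes = 0
    lo, hi = 0, len(items)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        tests += 1
        if contains_defective(items[lo:mid]):  # pool test on the left half
            yes += 1
            hi = mid
        else:                                  # defective is in the right half
            lo = mid
    return items[lo], tests, yes

# toy instance: 8 items, item 3 is defective
defective = {3}
item, tests, yes = binary_split(list(range(8)),
                                lambda pool: any(x in defective for x in pool))
```

On n items this uses about log2(n) tests, but only the splits whose tested half contains the defective produce a "yes"; strategies in the paper's model trade extra tests against fewer such responses.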
Welfare state and social spending: assessing the effectiveness and the efficiency of European social policies in 22 EU countries
This paper aims at analysing the effectiveness and the efficiency of social public expenditure in 22 European countries. We present a basic theoretical framework connecting the choice of the level of social protection to the median voter’s preferences and the inefficiency of expenditure. To test it against real data, we construct performance and efficiency indicators. While the existing literature measures the performance of social policy by restricting the analysis to its impact on inequality and the labour market, our index summarises the outcomes achieved in all sectors of social protection (family, health, labour market, elderly, disabled, unemployment, inequality). Based on this, we find that the ranking of countries differs from those found in the literature. We then put performance together with the amount of expenditure needed to achieve it (to better compare countries, we use social public expenditure net of taxes and transfers), constructing efficiency indicators and a production possibility frontier through the FDH (Free Disposal Hull) method. We find that efficiency is not related to the size of public intervention. Rather, our results suggest that population size and the type of welfare system might be more relevant factors: small countries tend to be more efficient than large ones, and targeting all sectors of social policy tends to be more efficient than concentrating on some areas only.
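The FDH idea can be sketched under simplifying assumptions not taken from the paper (a single input, net social spending, and a single output, the performance index; hypothetical country data): a country is on the frontier if no peer achieves at least its performance with less spending.

```python
def fdh_input_efficiency(data):
    """Input-oriented FDH scores: a country's score is the smallest
    input among peers performing at least as well, divided by its
    own input.  A score of 1.0 means the country is on the frontier."""
    scores = {}
    for country, (x, y) in data.items():
        # peers that match or beat this country's output (free disposal)
        peer_inputs = [xj for xj, yj in data.values() if yj >= y]
        scores[country] = round(min(peer_inputs) / x, 3)
    return scores

# hypothetical figures: country -> (net social spending, performance index)
data = {"A": (10.0, 0.8), "B": (14.0, 0.8), "C": (8.0, 0.6)}
scores = fdh_input_efficiency(data)
```

Here B is dominated by A (same performance, higher spending), so its score falls below 1, while A and C sit on the frontier.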
Efficient social policies with higher expenditure: an analysis for European countries
Based on the construction of two indicators assessing the relative effectiveness and efficiency of European welfare policies, we show that the variability of efficiency is explained not only by the amount of resources devoted to social policies but also by the institutional environment. An OLS regression shows that institutional variables, such as the accountability and honesty of public officials, have highly significant effects on efficiency.
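For illustration only, a closed-form simple OLS regression on hypothetical data (an accountability index regressed against an efficiency score; neither the variables nor the numbers come from the paper):

```python
def ols_slope_intercept(x, y):
    """Closed-form simple OLS: slope = cov(x, y) / var(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    slope = cov / var
    return slope, my - slope * mx

# hypothetical data: accountability index vs. efficiency score
acct = [0.2, 0.4, 0.6, 0.8]
eff = [0.5, 0.6, 0.7, 0.8]
slope, intercept = ols_slope_intercept(acct, eff)
```

A positive estimated slope is what the paper's finding would look like in this one-regressor toy; the actual analysis of course controls for additional variables.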
Innovation, competition and public procurement in the pre-commercial phase
Should the supply or the demand side bear the risk connected to innovation? The two polar cases identified in the literature are supply push and demand pull. The former is the typical one, with the supplier bearing the costs and obtaining the benefits of innovating. The latter is technology procurement, where the buyer takes the risk by procuring the innovative good or service. With respect to this, pre-commercial procurement is a peculiar solution, which explains the debate in the literature over its configuration as either a supply-side or a demand-side instrument. The separation from the commercial phase allows the procurer to take on only (part of) the risks connected to the R&D services. Also, competition among suppliers gives the opportunity of evaluating different solutions and of obtaining, in the commercial phase, a lower price for the innovative good. The counterpart of all this is that a large portion of the risk is left to the supplier. As a consequence, suppliers need to obtain a larger share of the benefits of the innovation process. This economic reason, besides the legal restrictions on State aid, explains the need for a "shared risks, shared benefits" approach, centred on agreements over the assignment of IPRs.
A COMPARATIVE STUDY OF ALTERNATIVE ECONOMETRIC PACKAGES: AN APPLICATION TO ITALIAN DEPOSIT INTEREST RATES
In examining the determinants of Italian deposit interest rates, we compare alternative econometric packages for estimating panel data. We focus on bank deposits, one of the main forms Italian households use to invest their financial wealth. We survey the literature on deposit rates, with particular reference to the large number of US studies. The empirical analysis is based on more than 8,000 observations for the years 1990-1996. Bank interest rates are taken from the Central Credit Register. We consider the rates on current accounts, certificates of deposit, and total deposits. Other variables are obtained from the Banking Supervision's statistical returns. We look at the influence on interest rates of the Herfindahl index, the number of banks in each province, the rate of growth in deposits, the custodial holdings of bonds, and the ratio of banking costs to total assets. With this abundance of panel data, many different specifications have been estimated using the fixed- and random-effects models. Our purpose is to examine the caveats about numerical accuracy raised by McCullough and Vinod, who are concerned that little attention is paid to numerical accuracy in the selection of econometric packages. We compare the numerical values of the estimates from three of the most popular econometric packages featuring built-in panel data estimation algorithms: LIMDEP, STATA, and TSP. As a numerical benchmark we use Modeleasy, a general-purpose language allowing matrix operations. The preliminary results look quite promising: 1) fixed-effects algorithms are numerically identical to the available decimal places; 2) random-effects algorithms yield slightly different results because of the method used for computing the variance components. In addition, we compare the relative efficiency of the random-effects algorithms provided by the three packages. This is done by means of a set of suitably designed Monte Carlo experiments, varying the time span and the number of provinces taken into account.
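The fixed-effects algorithms being compared all implement, in essence, the within (demeaning) transformation; a pure-Python sketch with one regressor and synthetic data (the variable names and numbers are invented, not the paper's):

```python
def within_estimator(panel):
    """Fixed-effects slope via the within transformation: demean x
    and y within each unit, then pool and run OLS through the origin."""
    xs, ys = [], []
    for obs in panel.values():            # obs: list of (x, y) per unit
        mx = sum(x for x, _ in obs) / len(obs)
        my = sum(y for _, y in obs) / len(obs)
        xs += [x - mx for x, _ in obs]    # demeaning sweeps out the
        ys += [y - my for _, y in obs]    # unit-specific fixed effect
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# hypothetical panel: y = unit effect + 2 * x, two "banks", three periods
panel = {"bank1": [(1, 3), (2, 5), (3, 7)],
         "bank2": [(1, 10), (2, 12), (3, 14)]}
beta = within_estimator(panel)
```

Because the fixed-effects estimator reduces to a single least-squares solve like this, the packages agree to the available decimal places; the random-effects estimator additionally requires estimated variance components, which is where the packages' results diverge.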
Strategic interactions between monetary and fiscal authorities in a monetary union
In this paper we extend Nordhaus’ (1994) results to an environment that may represent the current European situation, characterised by a single monetary authority and several fiscal bodies. We show that: a) co-operation among national fiscal authorities is welfare improving only if they also co-operate with the central bank; b) when this condition is not satisfied, fiscal rules, such as those envisaged in the Maastricht Treaty and in the Stability and Growth Pact, may work as co-ordination devices that improve welfare; c) the relationship between several treasuries and a single central bank makes the fiscal leadership solution collapse to the Nash one, so that, contrary to Nordhaus (1994) and Dixit and Lambertini (2001), fiscal discipline no longer obtains when moving from the Nash to the Stackelberg solution. Also in this case we thus argue in favour of fiscal rules in a monetary union.
Keywords: fiscal and monetary policy co-ordination; monetary union; international fiscal issues.
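How Nash and Stackelberg outcomes are computed can be illustrated with a toy one-treasury linear-quadratic game (hypothetical loss functions, not the paper's model; the paper's point c) is precisely that with several treasuries this leadership effect vanishes):

```python
def argmin(f, lo=-3.0, hi=3.0, steps=60001):
    """Brute-force scalar minimiser on an even grid (toy accuracy ~1e-4)."""
    step = (hi - lo) / (steps - 1)
    idx = min(range(steps), key=lambda i: f(lo + i * step))
    return lo + idx * step

# hypothetical quadratic losses: m = monetary instrument, g = fiscal one
L_cb = lambda m, g: (m - 1 + 0.5 * g) ** 2   # central bank's loss
L_f  = lambda g, m: (g - 1) ** 2 + m ** 2    # treasury's loss
br_m = lambda g: 1 - 0.5 * g                 # central bank best response (FOC)

# Nash: each authority takes the other's choice as given
g_nash = argmin(lambda g: L_f(g, 0))         # treasury's BR: independent of m
m_nash = br_m(g_nash)

# Fiscal leadership (Stackelberg): treasury anticipates the reaction br_m(g)
g_lead = argmin(lambda g: L_f(g, br_m(g)))
m_lead = br_m(g_lead)
```

In this toy the fiscal leader chooses a larger g than under Nash because it internalises the central bank's accommodating reaction; with many small treasuries each one's influence on the common central bank shrinks, which is the mechanism behind the collapse of leadership to Nash.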
Data Warehouse Design and Management: Theory and Practice
The need to store data and information permanently, for reuse in later stages, is a very relevant problem in the modern world and now affects a large number of people and economic agents. The storage and subsequent use of data can indeed be a valuable source for decision making or for increasing commercial activity. The next step beyond data storage is the efficient and effective use of information, particularly through Business Intelligence, at whose foundation lies the implementation of a Data Warehouse. In the present paper we will analyze Data Warehouses and their theoretical models, and illustrate a practical implementation in a specific case study on a pharmaceutical distribution company.
Keywords: data warehouse, database, data model.
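A minimal star schema, the modelling pattern at the core of most Data Warehouse designs, can be sketched with the standard-library sqlite3 module (table and column names are invented for a pharmaceutical-distribution flavour, not taken from the case study):

```python
import sqlite3

# one fact table of measures, surrounded by descriptive dimension tables
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, month TEXT);
CREATE TABLE fact_sales (
    product_id INTEGER REFERENCES dim_product,
    date_id    INTEGER REFERENCES dim_date,
    qty        INTEGER,
    revenue    REAL);
INSERT INTO dim_product VALUES (1, 'aspirin'), (2, 'ibuprofen');
INSERT INTO dim_date    VALUES (1, '2024-01'), (2, '2024-02');
INSERT INTO fact_sales  VALUES (1, 1, 100, 500.0), (2, 1, 50, 400.0),
                               (1, 2, 80, 400.0);
""")

# a typical OLAP-style roll-up: total revenue by month
rollup = conn.execute("""
    SELECT d.month, SUM(f.revenue)
    FROM fact_sales f JOIN dim_date d ON f.date_id = d.date_id
    GROUP BY d.month ORDER BY d.month""").fetchall()
```

The design choice is the classic one: measures live once in the narrow fact table, while descriptive attributes are factored into dimensions, so aggregations are simple joins plus GROUP BY.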
An Analysis of Business VPN Case Studies
A VPN (Virtual Private Network) simulates a secure private network over a shared, insecure public infrastructure such as the Internet. The VPN protocol provides secure and reliable access from home or office over any networking technology that transports IP packets. In this article we study the standards for VPN implementation and analyze two case studies: a VPN between two routers and one between two firewalls.
Keywords: VPN; network; protocol.
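The encapsulation idea behind VPN tunnelling, an inner packet wrapped in an outer tunnel header with an encrypted payload, can be sketched as a toy; the XOR step here merely stands in for real encryption, which in practice is IPsec or TLS:

```python
import struct

KEY = b"\x5a"  # toy key; real VPNs use IPsec or TLS, never XOR

def xor(data):
    """Stand-in for encryption (symmetric, so it also decrypts)."""
    return bytes(b ^ KEY[0] for b in data)

def encapsulate(inner_packet, tunnel_src, tunnel_dst):
    """Wrap an inner packet in an outer tunnel header and 'encrypt'
    the payload: the IP-in-IP idea behind VPN tunnelling."""
    header = struct.pack("!4s4sH", tunnel_src, tunnel_dst,
                         len(inner_packet))
    return header + xor(inner_packet)

def decapsulate(outer_packet):
    """Strip the 10-byte tunnel header and recover the inner packet."""
    src, dst, length = struct.unpack("!4s4sH", outer_packet[:10])
    return xor(outer_packet[10:10 + length])

inner = b"private LAN traffic"
outer = encapsulate(inner, b"\x0a\x00\x00\x01", b"\xc0\xa8\x01\x01")
```

Only the outer header travels in the clear between the two tunnel endpoints (here, the routers or firewalls of the case studies); the original packet is opaque in transit.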
