
    Traceability for Food Safety and Quality Assurance: Mandatory Systems Miss the Mark

    Traceability systems are record-keeping systems primarily used to keep foods with different attributes separate from one another. When information about a particular attribute of a food product is systematically recorded from creation through marketing, traceability for that attribute is established. Recently, policymakers in many countries have begun weighing the usefulness of mandatory traceability for managing such diverse problems as the threat of bio-terrorism, country-of-origin labelling, mad cow disease, and the identification of genetically engineered foods. The question before policymakers is: when is mandatory traceability a useful and appropriate policy choice?
    Keywords: Agricultural and Food Policy, Food Consumption/Nutrition/Food Safety

    EXPLORATIVE STUDY ON THE CYBER-ATTACK SOURCE TRACEBACK TECHNOLOGIES FOR BRIGHT INTERNET

    To cope with the various types of cyber-attacks on the Internet, several methods for tracking the source of an attack have been developed. Until recently, however, most of them have been defensive security methods rather than preventive ones. To establish the Bright Internet, which is still in its early stage, a technical source-tracking method is needed. For this, a standard and evaluation criteria are needed to determine which technologies are appropriate for the Bright Internet's requirements. In this paper, we classify cyber-attack source traceback technologies and derive criteria for evaluating them for the Bright Internet. Using these criteria, we evaluate existing traceback technologies from the perspective of the Bright Internet, covering the SAVA, PPM, iTrace, Controlled Flooding, Input Debugging, CenterTrack, IPsec, SPIE (hash-based), and Marking+Logging methods. Building on this work, future research will require in-depth verification of traceback technologies that reflect all the principles of the Bright Internet in practice.
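    Several of the surveyed techniques, such as PPM, rely on probabilistic packet marking: routers along the attack path stamp their identity into packets with some small probability, and the victim statistically reconstructs the path from the marks it collects. As an illustration only (the surveyed papers define the actual schemes), a minimal node-sampling sketch:

```python
import random

def forward(path, p=0.04):
    """One packet traverses the path (attacker -> victim); each router
    overwrites the packet's single mark field with probability p."""
    mark = None
    for router in path:
        if random.random() < p:
            mark = router
    return mark

def reconstruct(path, samples=200_000, p=0.04):
    """Victim side: rank routers by how often their mark survives.
    A mark survives with probability p * (1 - p)**(hops downstream),
    so routers nearer the victim typically rank first."""
    counts = {}
    for _ in range(samples):
        m = forward(path, p)
        if m is not None:
            counts[m] = counts.get(m, 0) + 1
    return sorted(counts, key=counts.get, reverse=True)

# Hypothetical 4-router attack path; R4 is adjacent to the victim.
print(reconstruct(["R1", "R2", "R3", "R4"]))
```

    Because routers nearer the victim overwrite earlier marks, their addresses appear most frequently at the victim, which is what makes path reconstruction possible from marked traffic alone.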

    A composable approach to design of newer techniques for large-scale denial-of-service attack attribution

    Since its early days, the Internet has witnessed not only phenomenal growth but also a large number of security attacks, and in recent years denial-of-service (DoS) attacks have emerged as one of the top threats. Stateless, destination-oriented Internet routing, combined with the ability to harness a large number of compromised machines and the relative ease and low cost of launching such attacks, has made this a hard problem to address. Additionally, the myriad requirements of scalability, incremental deployment, adequate user privacy protections, and appropriate economic incentives have further complicated the design of DDoS defense mechanisms. While the many research proposals to date have focused variously on prevention, mitigation, or traceback of DDoS attacks, they lack a comprehensive approach satisfying the different design criteria for successful attack attribution. Our first contribution is the design of a composable data model that represents the various dimensions of the attack attribution problem, particularly the performance attributes of accuracy, effectiveness, speed, and overhead, as orthogonal and mutually independent design considerations. We then design custom optimizations along each of these dimensions and integrate them into a single composite model that provides strong performance guarantees. The proposed model thus gives us a single framework that can not only address the individual shortcomings of the various known attack attribution techniques but also provide a more comprehensive countermeasure against DDoS attacks. Our second contribution is a concrete implementation based on the proposed composable data model, which adopts a graph-theoretic approach to identify and subsequently stitch together individual edge fragments in the Internet graph to reveal the true routing path of any network data packet.
The proposed approach is analyzed through theoretical and experimental evaluation across multiple metrics, including scalability, incremental deployment, speed and efficiency of the distributed algorithm, and the total overhead associated with its deployment. We thereby show that it is realistically feasible to provide strong performance and scalability guarantees for Internet-wide attack attribution. Our third contribution further advances the state of the art by directly identifying individual path fragments in the Internet graph, adopting a distributed divide-and-conquer approach that employs simple recurrence relations as individual building blocks. A detailed analysis of the proposed approach on real-life Internet topologies, with respect to network storage and traffic overhead, provides a more realistic characterization. The proposed approach thus not only lends itself to simplified operation at scale but can also provide robust network-wide performance and security guarantees for Internet-wide attack attribution. Our final contribution introduces the notion of anonymity into the overall attack attribution process, significantly broadening its scope. The highly invasive nature of widespread data gathering for network traceback continues to violate one of the key principles of Internet use today: the ability to stay anonymous and operate freely without retribution. In this regard, we successfully reconcile these mutually divergent requirements to make network traceback not only economically feasible and politically viable but also socially acceptable.
This work opens up several directions for future research: analysis of existing attack attribution techniques to identify further scope for improvement, incorporation of newer attributes into the design framework of the composable data model abstraction, and design of newer attack attribution techniques that comprehensively integrate the various attack prevention, mitigation, and traceback techniques in an efficient manner.
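    The edge-fragment stitching described above can be illustrated with a toy reconstruction step. Assuming, hypothetically, that each fragment records an upstream router, a downstream router, and the downstream end's hop distance from the victim (as in classic edge-sampling traceback, which may differ from the thesis's actual encoding), the stitching looks like this:

```python
def stitch_path(fragments, victim):
    """Chain (upstream, downstream, distance) edge fragments into a path,
    working hop by hop outward from the victim. distance = hops between
    the downstream end of the fragment and the victim."""
    by_dist = {d: (up, down) for up, down, d in fragments}
    node, d, path = victim, 0, [victim]
    while d in by_dist:
        up, down = by_dist[d]
        if down != node:   # fragment does not connect to the path so far
            break
        path.append(up)    # extend one hop toward the attacker
        node, d = up, d + 1
    return path            # victim -> ... -> farthest attributable router

# Fragments arrive unordered, as they would from marked packets.
fragments = [("R3", "R4", 1), ("R1", "R2", 3), ("R4", "V", 0), ("R2", "R3", 2)]
print(stitch_path(fragments, "V"))  # -> ['V', 'R4', 'R3', 'R2', 'R1']
```

    Stitching stops at the first missing or non-connecting fragment, which is why the thesis's accuracy and effectiveness dimensions (how many fragments are collected, and how reliably) matter for how far back attribution can reach.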

    Towards Loop-Free Forwarding of Anonymous Internet Datagrams that Enforce Provenance

    The way in which addressing and forwarding are implemented in the Internet constitutes one of its biggest privacy and security challenges. The fact that source addresses in Internet datagrams cannot be trusted makes the IP Internet inherently vulnerable to DoS and DDoS attacks. The Internet forwarding plane is open to attacks on the privacy of datagram sources, because source addresses in Internet datagrams have global scope. The fact that Internet datagrams are forwarded based solely on the destination addresses stated in datagram headers and the next hops stored in the forwarding information bases (FIBs) of relaying routers allows Internet datagrams to traverse loops, which wastes resources and leaves the Internet open to further attacks. We introduce PEAR (Provenance Enforcement through Addressing and Routing), a new approach to the addressing and forwarding of Internet datagrams that enables anonymous forwarding of Internet datagrams, eliminates many of the existing DDoS attacks on the IP Internet, and prevents Internet datagrams from looping, even in the presence of routing-table loops.
    Comment: Proceedings of IEEE Globecom 2016, 4-8 December 2016, Washington, D.C., USA
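    The looping behavior that PEAR targets is easy to demonstrate: because each relaying router consults only the destination address and its own FIB next-hop, two routers with mutually inconsistent tables bounce a datagram between them until its TTL expires. A minimal simulation with hypothetical router names and tables (not PEAR itself, just the problem it prevents):

```python
def forward(fibs, src, dst, ttl=16):
    """Destination-only forwarding: every router looks up the datagram's
    destination in its own FIB and hands it to the resulting next hop."""
    hops, node = [src], src
    while node != dst and ttl > 0:
        node = fibs[node][dst]   # next hop chosen purely by destination
        hops.append(node)
        ttl -= 1
    return hops, node == dst

# Transient inconsistency: A believes D is reached via B,
# while B believes D is reached via A.
loopy = {"A": {"D": "B"}, "B": {"D": "A"}}
path, delivered = forward(loopy, "A", "D")
print(delivered, len(path))  # -> False 17  (bounced until TTL expired)
```

    The datagram consumes one hop of resources per bounce and is never delivered; PEAR's contribution is to make such loops impossible even while the routing tables themselves are inconsistent.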

    Do Inspection and Traceability Provide Incentives for Food Safety?

    One of the goals of inspection and traceability is to motivate suppliers to deliver safer food. The ability of these policies to motivate suppliers depends on the accuracy of the inspection, the cost of failing inspection, the cost of causing a foodborne illness, and the proportion of these costs paid by the supplier. We develop a model of the supplier's expected cost as a function of inspection accuracy, the cost of failure, and the proportion of the failure cost that is allocated to suppliers. The model is used to identify the conditions under which the supplier is motivated to deliver uncontaminated lots. Surprisingly, our results show that when safety failure costs can be allocated to suppliers, minimum levels of inspection error are required to motivate a supplier to deliver uncontaminated lots. This result does not hold when costs cannot be allocated to suppliers. As a case study, we use our results to analyze the technical requirements for suppliers of frozen beef to the USDA's Agricultural Marketing Service.
    Keywords: diagnostic error, food safety, inspection, sampling error, traceability, Food Consumption/Nutrition/Food Safety
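    The abstract does not give the model's functional form, but the incentive logic can be sketched with a hypothetical expected-cost comparison: a contaminated lot incurs the inspection-failure cost F when caught and the supplier's allocated share s of the illness cost I when missed. All function names and parameter values below are illustrative, not taken from the paper:

```python
def expected_cost_contaminated(e, F, I, s, c_dirty):
    """Hypothetical expected cost of shipping a contaminated lot:
    production cost, plus failure cost F if inspection catches it
    (probability 1 - e), plus the supplier's share s of the illness
    cost I if inspection misses it (probability e)."""
    return c_dirty + (1 - e) * F + e * s * I

def prefers_clean(e, F, I, s, c_dirty, c_clean):
    """Supplier delivers a clean lot when doing so is cheaper in expectation."""
    return c_clean < expected_cost_contaminated(e, F, I, s, c_dirty)

# Illustrative numbers only: the failure cost F is small relative to the
# supplier's allocated share of illness costs (s * I), so the incentive
# to deliver clean lots grows with the inspection error rate e.
params = dict(F=10.0, I=1000.0, s=0.5, c_dirty=1.0, c_clean=50.0)
print(prefers_clean(e=0.01, **params))  # -> False (near-perfect inspection)
print(prefers_clean(e=0.20, **params))  # -> True  (enough error to deter)
```

    In this toy version, when s * I exceeds F, the missed-inspection term grows with the error rate e, so a minimum level of inspection error is what tips the supplier toward clean lots, consistent with the result stated in the abstract.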

    TRACEABILITY IN THE U.S. FOOD SUPPLY: ECONOMIC THEORY AND INDUSTRY STUDIES

    This investigation into the traceability baseline in the United States finds that private-sector food firms have developed a substantial capacity to trace. Traceability systems are a tool to help firms manage the flow of inputs and products to improve efficiency, product differentiation, food safety, and product quality. Firms balance the private costs and benefits of traceability to determine the efficient level of traceability. In cases of market failure, where the private-sector supply of traceability is not socially optimal, the private sector has developed a number of mechanisms to correct the problem, including contracting, third-party safety/quality audits, and industry-maintained standards. The best-targeted government policies for strengthening firms' incentives to invest in traceability are aimed at ensuring that unsafe or falsely advertised foods are quickly removed from the system, while allowing firms the flexibility to determine the manner in which to do so. Possible policy tools include timed recall standards, increased penalties for distribution of unsafe foods, and increased foodborne-illness surveillance.
    Keywords: traceability, tracking, traceback, tracing, recall, supply-side management, food safety, product differentiation, Food Consumption/Nutrition/Food Safety, Industrial Organization