
    Traceability in the U.S. Food Supply: Economic Theory and Industry Studies

    This investigation into the traceability baseline in the United States finds that private sector food firms have developed a substantial capacity to trace. Traceability systems are a tool to help firms manage the flow of inputs and products to improve efficiency, product differentiation, food safety, and product quality. Firms balance the private costs and benefits of traceability to determine the efficient level of traceability. In cases of market failure, where the private sector supply of traceability is not socially optimal, the private sector has developed a number of mechanisms to correct the problem, including contracting, third-party safety/quality audits, and industry-maintained standards. The best-targeted government policies for strengthening firms' incentives to invest in traceability are aimed at ensuring that unsafe or falsely advertised foods are quickly removed from the system, while allowing firms the flexibility to determine how to do so. Possible policy tools include timed recall standards, increased penalties for distribution of unsafe foods, and increased foodborne-illness surveillance.
    Keywords: traceability, tracking, traceback, tracing, recall, supply-side management, food safety, product differentiation, Food Consumption/Nutrition/Food Safety, Industrial Organization

    A multi-disciplinary framework for cyber attribution

    Effective cyber security is critical to the prosperity of any nation in the modern world. We have become dependent upon this interconnected network of systems for a number of critical functions within society. As our reliance upon this technology has increased, so have the prospective gains for malicious actors who would abuse these systems for their own personal benefit, at the cost of legitimate users. The result has been an explosion of cyber attacks, or cyber-enabled crimes. The threat from hackers, organised criminals and even nation states is ever increasing. One of the critical enablers of our cyber security is cyber attribution, the ability to tell who is acting against our systems. A purely technical approach to cyber attribution has been found to be ineffective in the majority of cases, taking too narrow an approach to the attribution problem. A purely technical approach will provide Indicators of Compromise (IOCs), which are suitable for the immediate recovery and clean-up of a cyber event. It fails, however, to ask the deeper questions about the origin of the attack, which can only be answered through a wider set of analysis and additional sources of data. Unfortunately, due to the wide range of data types and the highly specialist skills required to perform this deep-level analysis, there is currently no common framework within which analysts can work together towards resolving the attribution problem. This is further exacerbated by a communication barrier between the highly specialised fields and the absence of obviously compatible data types. The aim of the project is to develop a common framework to which experts from a number of disciplines can contribute towards the overall attribution picture. These experts add their input in the form of a library. First, a process was developed to enable the creation of compatible libraries in different specialist fields. A series of libraries can then be used by an analyst to create an overarching attribution picture. The framework highlights any intelligence gaps, and an analyst can use the list of libraries to suggest a tool or method to fill each gap. By the end of the project a working framework had been developed, with a number of libraries from a wide range of technical attribution disciplines. These libraries were used to feed real-time intelligence to both technical and non-technical analysts, who were then able to use this information to perform in-depth attribution analysis. The pictorial format of the framework was found to help break down the communication barrier between disciplines, and it was suitable as an intelligence product in its own right, providing a useful visual aid for briefings. The simplicity of the library-based system meant that the process was easy to learn, requiring only a short introduction to the framework.
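
    The abstract does not give the thesis's actual data model, so the sketch below is only an illustration of the library idea it describes: each specialist discipline contributes a "library" of observations keyed to attribution questions, the framework merges them into one attribution picture, and any unanswered questions are flagged as intelligence gaps. All class, field and question names are hypothetical.

```python
"""Minimal sketch of a library-based attribution framework.
All names (Library, Observation, ATTRIBUTION_QUESTIONS, ...) are
illustrative assumptions, not the thesis's actual implementation."""
from dataclasses import dataclass, field


@dataclass
class Observation:
    question: str      # attribution question this evidence speaks to
    finding: str       # the library's conclusion
    confidence: float  # 0.0 - 1.0


@dataclass
class Library:
    """One specialist discipline's contribution (malware analysis, netflow, OSINT, ...)."""
    discipline: str
    observations: list[Observation] = field(default_factory=list)


# Questions every attribution picture should try to answer (hypothetical set).
ATTRIBUTION_QUESTIONS = ["infrastructure", "tooling", "origin country", "actor motivation"]


def build_picture(libraries: list[Library]) -> dict[str, list[Observation]]:
    """Merge all libraries into one picture keyed by attribution question."""
    picture: dict[str, list[Observation]] = {q: [] for q in ATTRIBUTION_QUESTIONS}
    for lib in libraries:
        for obs in lib.observations:
            picture.setdefault(obs.question, []).append(obs)
    return picture


def intelligence_gaps(picture: dict[str, list[Observation]]) -> list[str]:
    """Questions with no supporting observations are flagged as gaps."""
    return [q for q, obs in picture.items() if not obs]


if __name__ == "__main__":
    malware = Library("malware analysis", [Observation("tooling", "custom RAT, reused packer", 0.7)])
    netflow = Library("network forensics", [Observation("infrastructure", "C2 on bulletproof VPS", 0.6)])
    picture = build_picture([malware, netflow])
    print("gaps:", intelligence_gaps(picture))  # ['origin country', 'actor motivation']
```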

    Traceability -- A Literature Review

    In light of recent food safety crises and international trade concerns associated with food- or animal-related diseases, traceability has once again become important in the minds of public policymakers, business decision makers, consumers and special interest groups. This study reviews the literature on traceability, government regulation and consumer behaviour, provides case studies of current traceability systems, and gives a rough breakdown of the various costs and benefits of traceability. The report aims to identify gaps that may currently exist in the literature on traceability in the domestic beef supply chain, as well as to suggest possible directions for future research. Three main conclusions can be drawn from this study. First, there is no common definition of traceability, which makes identifying similarities and differences across studies difficult if not impossible. To this end, this study adopts the CFIA's definition of traceability. This definition has been adopted by numerous other agencies and aligns with the EU's official definition of traceability; however, it may or may not be acceptable from the perspective of major Canadian beef and cattle trade partners. Second, the studies reviewed in this report address one or more of five key objectives: the impact of changing consumer behaviour on market participants, suppliers' incentives to adopt or participate in traceability, the impact of regulatory changes, supplier response to crises, and technical descriptions of traceability systems. The consumer studies suggest that consumers do not value traceability per se; rather, traceability is a means for consumers to receive validation of another production or process attribute that they are interested in. Moreover, supply chain improvement, food safety control and access to foreign market segments are strong incentives for primary producers and processors to participate in programs with traceability features. However, the objectives addressed by the studies reviewed in this paper are not necessarily the objectives of most immediate relevance to decision makers choosing which traceability standards to recommend, require or subsidize. In many cases the research objectives of previous work have been extremely narrow, creating a body of literature that is incomplete in certain key areas. Third, case studies of existing traceability systems in Australia, the UK, Scotland, Brazil and Uruguay indicate that the pattern of development varies widely across sectors and regions. In summary, a traceability system by itself cannot provide value-added for all participants in the industry; it is merely a protocol for documenting and sharing information. Value is added for participants in the marketing chain through traceability in the form of reduced transaction costs in the case of a food safety incident and through the ability to shift liability. To ensure consumer benefit and to have premiums returned to primary producers, the type of information that consumers value is an important issue for future research. A successful program that piques consumer interest and can enhance the eating experience can generate economic benefits for all sectors of the beef industry. International market access will increasingly require traceability in the marketing system in order to satisfy trade restrictions arising from animal diseases and country-of-origin labelling, to name only two examples. Designing appropriate traceability protocols industry-wide is therefore becoming very important.
    Keywords: traceability, institutions, Canada, consumer behaviour, producer behaviour, supply chain, Agricultural and Food Policy, Consumer/Household Economics, Food Consumption/Nutrition/Food Safety, Health Economics and Policy, International Relations/Trade, Livestock Production/Industries, Marketing, Production Economics, D020, D100, D200, Q100

    Adaptive Response System for Distributed Denial-of-Service Attacks

    The continued prevalence and severely damaging effects of Distributed Denial of Service (DDoS) attacks on today's Internet raise growing security concerns and call for better solutions to tackle them. Current DDoS prevention mechanisms are usually inflexible, and determined attackers with knowledge of these mechanisms can work around them. Most existing detection and response mechanisms are standalone systems that do not rely on adaptive updates to mitigate attacks. As different responses vary in their "leniency" in treating detected attack traffic, there is a need for an adaptive response system. We designed and implemented our DDoS Adaptive ResponsE (DARE) System, a distributed DDoS mitigation system capable of executing appropriate detection and mitigation responses automatically and adaptively according to the attacks. It supports easy integration of both signature-based and anomaly-based detection modules. Additionally, the design of DARE's individual components takes into consideration the strengths and weaknesses of existing defence mechanisms, and the characteristics and possible future mutations of DDoS attacks. These components consist of an Enhanced TCP SYN Attack Detector and Bloom-based Filter, a DDoS Flooding Attack Detector and Flow Identifier, and a Non-Intrusive IP Traceback mechanism. The components work together interactively to adapt the detections and responses to the attack types. Experiments conducted on DARE show that attack detection and mitigation are completed within seconds, with about 60% to 86% of the attack traffic being dropped, while availability for existing and new legitimate requests is maintained. DARE is able to detect and trigger appropriate responses to the attacks being launched with high accuracy, effectiveness and efficiency. We also designed and implemented a Traffic Redirection Attack Protection System (TRAPS), a stand-alone DDoS attack detection and mitigation system for IPv6 networks. In TRAPS, the victim under attack verifies the authenticity of the source by performing virtual relocations to differentiate legitimate traffic from attack traffic. TRAPS requires minimal deployment effort and does not require modifications to the Internet infrastructure, owing to its use of the Mobile IPv6 protocol. Experiments to test the feasibility of TRAPS were carried out in a testbed environment to verify that it would work with the existing Mobile IPv6 implementation. The operations of each module were observed to function correctly, and TRAPS was able to successfully mitigate an attack launched with spoofed source IP addresses.
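
    As an illustration of the kind of component the abstract names, the following sketch pairs a small Bloom filter with a SYN/handshake counter to flag a possible SYN flood. It is not DARE's Enhanced TCP SYN Attack Detector; the sizing, hashing and threshold are assumptions made for the example.

```python
"""Illustrative Bloom-filter-based SYN flood indicator (not DARE's code):
SYNs insert a connection key into the filter, completing ACKs that match a
pending key are counted, and a large SYN/completion imbalance suggests a flood."""
import hashlib


class BloomFilter:
    def __init__(self, size: int = 8192, hashes: int = 4):
        self.size, self.hashes = size, hashes
        self.bits = bytearray(size)

    def _positions(self, key: str):
        # Derive several bit positions from independent salted hashes of the key.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.size

    def add(self, key: str) -> None:
        for p in self._positions(key):
            self.bits[p] = 1

    def __contains__(self, key: str) -> bool:
        return all(self.bits[p] for p in self._positions(key))


class SynFloodIndicator:
    """Counts SYNs whose handshake is never completed within the observation window."""

    def __init__(self, threshold: int = 1000):
        self.pending = BloomFilter()
        self.syn_count = 0
        self.completed = 0
        self.threshold = threshold  # illustrative value

    def on_syn(self, src, dst, sport, dport) -> None:
        self.pending.add(f"{src}:{sport}->{dst}:{dport}")
        self.syn_count += 1

    def on_ack(self, src, dst, sport, dport) -> None:
        if f"{src}:{sport}->{dst}:{dport}" in self.pending:
            self.completed += 1

    def suspicious(self) -> bool:
        return (self.syn_count - self.completed) > self.threshold
```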

    Scalable schemes against Distributed Denial of Service attacks

    Defense against Distributed Denial of Service (DDoS) attacks is one of the primary concerns on the Internet today. DDoS attacks are difficult to prevent because of the open, interconnected nature of the Internet and its underlying protocols, which can be used in several ways to deny service. Attackers hide their identity by using third parties such as private chat channels on IRC (Internet Relay Chat). They also insert a false return IP address into packets (spoofing), which makes it difficult for the victim to determine a packet's origin. We propose three novel and realistic traceback mechanisms that offer many advantages over existing schemes. All three schemes take advantage of the Autonomous System (AS) topology and consider the fact that an attacker's packets may traverse a number of domains under different administrative control. Most traceback mechanisms wrongly assume that the network details of a company under a single administrative control are disclosed to the public; for security reasons, this is usually not the case. The proposed schemes overcome this drawback by considering reconstruction at both the inter-AS and intra-AS levels. Hierarchical Internet Traceback (HIT) and the Simple Traceback Mechanism (STM) trace back to an attacker in two phases: in the first phase the Autonomous System from which the attack originates is identified, while in the second phase the attacker within that AS is identified. Both HIT and STM allow the victim to trace back to the attackers within a few seconds. Their computational overhead is very low and they scale to large distributed attacks with thousands of attackers. Fast Autonomous System Traceback allows complete attack path reconstruction with only a few packets. We use traceroute maps of real Internet topologies from CAIDA's skitter project to simulate DDoS attacks and validate our design.
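
    The two-phase idea behind HIT and STM can be illustrated with a toy reconstruction: phase one picks the Autonomous System most strongly indicated by AS-level marks in the attack traffic, and phase two narrows down to a router inside that AS. The marking layout and field names below are hypothetical and far simpler than the schemes' actual encodings.

```python
"""Toy two-phase AS-level traceback reconstruction (illustrative only)."""
from collections import Counter


def phase1_origin_as(marked_packets):
    """Phase 1: the most frequent AS mark among attack packets is the candidate origin AS."""
    counts = Counter(p["as_mark"] for p in marked_packets if "as_mark" in p)
    return counts.most_common(1)[0][0] if counts else None


def phase2_origin_router(marked_packets, origin_as):
    """Phase 2: within the origin AS, intra-AS marks point to the ingress router."""
    counts = Counter(
        p["router_mark"] for p in marked_packets
        if p.get("as_mark") == origin_as and "router_mark" in p
    )
    return counts.most_common(1)[0][0] if counts else None


packets = [
    {"as_mark": 64512, "router_mark": "r7"},
    {"as_mark": 64512, "router_mark": "r7"},
    {"as_mark": 64999, "router_mark": "r2"},  # stray / legitimate traffic
]
as_origin = phase1_origin_as(packets)                          # 64512
print(as_origin, phase2_origin_router(packets, as_origin))     # 64512 r7
```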

    On packet marking and Markov modeling for IP Traceback: A deep probabilistic and stochastic analysis

    For many years, methods to defend against Denial of Service attacks have attracted attention from many different points of view, although network security is a large and very complex topic. Different techniques have been proposed, and so-called packet marking and IP tracing procedures in particular have demonstrated a good capacity to face different malicious attacks. While host-based DoS attacks are more easily traced and managed, network-based DoS attacks are a more challenging threat. In this paper, we discuss a powerful aspect of the IP traceback method, which allows a router to mark and add information to attack packets on the basis of a fixed probability value. We propose a method for modeling the classic probabilistic packet marking algorithm as a Markov chain, obtaining a closed form for the number of received marked packets needed to build a meaningful attack graph, and we analyze how marking routers must behave to minimize the overall overhead.
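
    One concrete way to see the quantity the paper analyses is the classic node-sampling model of probabilistic packet marking: each of the d routers on the path overwrites the mark with probability p, so the victim receives the mark of the router i hops from the attacker with probability p(1-p)^(d-i). The Monte Carlo sketch below estimates how many attack packets the victim must collect before every router's mark has been seen; the parameter values and the quoted expectation bound are illustrative, not results from the paper.

```python
"""Monte Carlo sketch of classic probabilistic packet marking (node sampling)."""
import math
import random


def packets_until_full_path(d: int, p: float) -> int:
    """Count packets until at least one mark from every router has arrived."""
    seen, packets = set(), 0
    while len(seen) < d:
        packets += 1
        mark = None
        for router in range(1, d + 1):  # router 1 is farthest from the victim
            if random.random() < p:
                mark = router           # routers nearer the victim overwrite earlier marks
        if mark is not None:
            seen.add(mark)
    return packets


d, p, trials = 15, 0.04, 200
avg = sum(packets_until_full_path(d, p) for _ in range(trials)) / trials
bound = math.log(d) / (p * (1 - p) ** (d - 1))  # standard expectation bound for node sampling
print(f"simulated mean: {avg:.0f} packets, analytical bound: {bound:.0f}")
```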

    Message traceback systems dancing with the devil

    The research community has produced a great deal of work in recent years in the areas of IP, layer 2 and connection-chain traceback. We collectively designate these as message traceback systems, which invariably aim to locate the origin of network data in spite of any alterations effected to that data (whether legitimately or fraudulently). This thesis provides a unifying definition of spoofing and a classification based on it which aims to encompass all streams of message traceback research. The feasibility of this classification is established through its application in our literature review of the numerous known message traceback systems. We propose two layer 2 (L2) traceback systems, switch-SPIE and COTraSE, which adopt different approaches to logging-based L2 traceback for switched Ethernet. While message traceback in spite of spoofing is interesting, and perhaps more challenging than it at first seems, one might say that it is rather academic. Logging of network data is a controversial and unpopular notion, and network administrators do not want the added installation and maintenance costs. However, European Parliament Directive 2006/24/EC requires that providers of publicly available electronic communications networks retain data, in a form similar to mobile telephony call records, from April 2009 and for periods of up to 2 years. This thesis identifies the relevance of work in all areas of message traceback to the European data retention legislation. In the final part of this thesis we apply our experiences with L2 traceback, together with our definitions and classification of spoofing, to discuss the issues that EU data retention implementations should consider. It is possible to 'do logging right' and even safeguard user privacy, but only if we fully understand the technical challenges, which requires much further work in all areas of logging-based message traceback systems. We have no choice but to dance with the devil.
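
    The abstract does not spell out switch-SPIE's or COTraSE's record format, so the sketch below only illustrates the general logging-based approach it refers to: each switch keeps a short digest of every frame together with its ingress port, and a later query for a frame's digest reveals which switches carried it and where it entered the network. The names, digest length and single-table layout are assumptions made for the example.

```python
"""Sketch of logging-based layer-2 traceback via per-switch frame digests (illustrative)."""
import hashlib


def digest(frame_bytes: bytes) -> str:
    # Real schemes hash only invariant header fields plus a payload prefix.
    return hashlib.sha256(frame_bytes).hexdigest()[:16]


class SwitchLog:
    def __init__(self, name: str):
        self.name = name
        self.seen: dict[str, int] = {}  # frame digest -> ingress port

    def record(self, frame: bytes, ingress_port: int) -> None:
        self.seen[digest(frame)] = ingress_port

    def query(self, frame_digest: str):
        return self.seen.get(frame_digest)  # None if this switch never saw the frame


def traceback(switches, frame: bytes):
    """Return (switch, ingress port) pairs for every switch that forwarded the frame."""
    d = digest(frame)
    return [(sw.name, sw.query(d)) for sw in switches if sw.query(d) is not None]


edge, core = SwitchLog("edge-1"), SwitchLog("core-1")
frame = b"\x00\x11\x22..."              # stands in for a captured Ethernet frame
edge.record(frame, ingress_port=3)
core.record(frame, ingress_port=12)
print(traceback([edge, core], frame))   # [('edge-1', 3), ('core-1', 12)]
```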