
    TechNews digests: Jan - Nov 2009

    TechNews is a technology news and analysis service aimed at anyone in the education sector keen to stay informed about technology developments, trends and issues. TechNews focuses on emerging technologies and other technology news. The TechNews service published digests from September 2004 until May 2010; combined analysis pieces and news items appeared every two to three months.

    Embedded Firmware Solutions

    Computer science.

    Routing on the Channel Dependency Graph: A New Approach to Deadlock-Free, Destination-Based, High-Performance Routing for Lossless Interconnection Networks

    In the pursuit of ever-increasing compute power, and with Moore's law slowly coming to an end, high-performance computing has started to scale out to larger systems. Alongside the increasing system size, the interconnection network is growing to accommodate and connect tens of thousands of compute nodes. These networks have a large influence on the total cost, application performance, energy consumption, and overall system efficiency of the supercomputer. Unfortunately, state-of-the-art routing algorithms, which define the packet paths through the network, do not utilize this important resource efficiently. Topology-aware routing algorithms are becoming increasingly inapplicable due to irregular topologies, which are either irregular by design or, more often, the result of hardware failures. Exchanging faulty network components potentially requires whole-system downtime, further increasing the cost of the failure. This management approach becomes more and more impractical due to the scale of today's networks and the accompanying steady decrease of the mean time between failures. Alternative methods of operating and maintaining these high-performance interconnects, in terms of both hardware and software management, are necessary to mitigate the negative effects experienced by scientific applications executed on the supercomputer. However, existing topology-agnostic routing algorithms either suffer from poor load balancing or are not bounded in the number of virtual channels needed to resolve deadlocks in the routing tables. The fail-in-place strategy, a method well established in storage systems whereby only critical component failures are repaired, is a feasible solution for current and future HPC interconnects, as well as for other large-scale installations such as data center networks. An appropriate combination of topology and routing algorithm is required, however, to minimize the throughput degradation for the entire system. This thesis contributes a network simulation toolchain to facilitate the process of finding a suitable combination, either during system design or while the system is in operation. On top of this foundation, a key contribution is a novel scheduling-aware routing, which reduces fault-induced throughput degradation while improving overall network utilization. The scheduling-aware routing performs frequent property-preserving routing updates to optimize the path balancing for simultaneously running batch jobs. The increased deployment of lossless interconnection networks, in conjunction with fail-in-place modes of operation and topology-agnostic, scheduling-aware routing algorithms, necessitates new solutions to the routing-deadlock problem. This thesis therefore further advances the state of the art by introducing a novel concept of routing on the channel dependency graph, which allows the design of a universally applicable destination-based routing capable of optimizing the path balancing without exceeding a given number of virtual channels, which are a common hardware limitation. This disruptive innovation enables implicit deadlock avoidance during path calculation, instead of solving both problems separately as all previous solutions do.
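
    The underlying principle, that a routing function is deadlock-free when its channel dependency graph (CDG) is acyclic (going back to Dally and Seitz), can be illustrated with a minimal sketch. This is not the thesis's toolchain: the toy ring topology, the routing table, and all function names below are invented for illustration.

        # Minimal sketch: test a destination-based routing for deadlock freedom
        # by checking its channel dependency graph (CDG) for cycles.
        # Channels are directed links (u, v); routing[(node, dest)] -> next hop.
        # Assumes the routing tables themselves are loop-free.
        from collections import defaultdict

        def build_cdg(nodes, routing):
            """Add an edge c1 -> c2 whenever some packet may occupy
            channel c2 immediately after channel c1 on its path."""
            cdg = defaultdict(set)
            for dest in nodes:
                for src in nodes:
                    node = src
                    while node != dest:
                        nxt = routing[(node, dest)]
                        hop2 = routing.get((nxt, dest))
                        if hop2 is not None and nxt != dest:
                            cdg[(node, nxt)].add((nxt, hop2))
                        node = nxt
            return cdg

        def has_cycle(cdg):
            """Iterative depth-first search for a back edge in the CDG."""
            WHITE, GRAY, BLACK = 0, 1, 2
            color = defaultdict(int)
            for start in list(cdg):
                if color[start] != WHITE:
                    continue
                color[start] = GRAY
                stack = [(start, iter(cdg[start]))]
                while stack:
                    node, it = stack[-1]
                    for succ in it:
                        if color[succ] == GRAY:
                            return True      # back edge: dependency cycle
                        if color[succ] == WHITE:
                            color[succ] = GRAY
                            stack.append((succ, iter(cdg[succ])))
                            break
                    else:
                        color[node] = BLACK
                        stack.pop()
            return False

        # Toy 4-node ring with naive clockwise routing: the channel
        # dependencies form a cycle, so the routing is deadlock-prone and
        # needs a virtual channel (or different paths) to break the cycle.
        nodes = [0, 1, 2, 3]
        routing = {(n, d): (n + 1) % 4 for n in nodes for d in nodes if n != d}
        print(has_cycle(build_cdg(nodes, routing)))   # True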

    Towards understanding and mitigating attacks leveraging zero-day exploits

    Zero-day vulnerabilities are unknown to defenders and therefore unaddressed, with the result that they can be exploited by attackers to gain unauthorised system access. In order to understand and mitigate attacks leveraging zero-days or unknown techniques, it is necessary to study the vulnerabilities, exploits and attacks that make use of them. In recent years there have been a number of leaks publishing such attacks using various methods to exploit vulnerabilities. This research seeks to understand what types of vulnerabilities exist, why and how these are exploited, and how to defend against such attacks by mitigating either the vulnerabilities or the method/process of exploiting them. By moving beyond merely remedying the vulnerabilities to defences that are able to prevent or detect the actions taken by attackers, an information system will be better positioned to deal with future unknown threats. An interesting finding is how attackers move beyond the observable bounds of a system to circumvent security defences, for example by compromising syslog servers or going down to lower system rings to gain access. Defenders can counter this by employing defences that are external to the system, preventing attackers from disabling them or removing collected evidence after gaining system access. Attackers are able to defeat air-gaps via the leakage of electromagnetic radiation, and to misdirect attribution by planting false artefacts for forensic analysis and by attacking from third-party information systems. They analyse the methods of other attackers to learn new techniques; an example of this is the Umbrage project, whereby malware is analysed to decide whether it should be implemented as a proof of concept. Another important finding is that attackers respect defence mechanisms such as remote syslog (e.g. firewall), core dump files, database auditing, and Tripwire (e.g. SlyHeretic), since these defences all have the potential to result in the attacker being discovered. Attackers must either negate the defence mechanism or find unprotected targets. Defenders can use technologies such as encryption to defend against interception and man-in-the-middle attacks. They can also employ honeytokens and honeypots to alarm, misdirect, slow down and learn from attackers. By employing various tactics, defenders are able to increase their chances of detecting attacks and the time available to react, even against attacks exploiting hitherto unknown vulnerabilities. To summarize the information presented in this thesis and to show its practical importance, an examination is presented of the NSA's network intrusion into the SWIFT organisation. It shows that the firewalls were exploited with remote-code-execution zero-days, an attack with a striking parallel in the approach used by the recent VPNFilter malware. If nothing else, the leaks provide information to other actors on how to attack and what to avoid. By studying state actors, we can gain insight into what other actors with fewer resources may be able to do in the future.
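
    Two of the defensive patterns named above, shipping evidence off the monitored host so a root-level intruder cannot erase it, and planting honeytokens, can be sketched in a few lines. This is an illustrative sketch only: the collector host name, port and bait-file path are placeholders, and a real deployment would use an authenticated, append-only log collector.

        # Sketch: off-host audit logging plus a honeytoken tripwire.
        # 'loghost.example.com', port 514 and the token path are placeholders.
        import logging
        import logging.handlers
        import os

        # Events go over UDP syslog to a separate collector, so an attacker
        # who later compromises this host cannot delete what was already sent.
        logger = logging.getLogger("audit")
        logger.setLevel(logging.INFO)
        logger.addHandler(logging.handlers.SysLogHandler(
            address=("loghost.example.com", 514)))

        # Bait file that no legitimate process ever reads.
        HONEYTOKEN = "/srv/backup/passwords.txt"

        def check_honeytoken():
            """Alert if the bait file was read (access time newer than its
            last modification); requires atime tracking, i.e. no 'noatime'."""
            try:
                st = os.stat(HONEYTOKEN)
            except FileNotFoundError:
                logger.warning("honeytoken %s missing; possible tampering",
                               HONEYTOKEN)
                return
            if st.st_atime > st.st_mtime:
                logger.warning("honeytoken %s was accessed; possible intrusion",
                               HONEYTOKEN)

        if __name__ == "__main__":
            check_honeytoken()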

    Study of Various Motherboards

    Not available.

    Review Pages: Methods, Tools and Best practices to Increase the Capacity of Urban Systems to Adapt to Natural and Man-made Changes

    Starting from the relationship between urban planning and mobility management, TeMA has gradually expanded the range of topics it covers, while remaining within the groove of rigorous, in-depth scientific analysis. Over the last two years, particular attention has been paid to the Smart City theme and to the different meanings that come with it. The last section of the journal is formed by the Review Pages. These have several aims: to report on problems, trends and evolutionary processes; to investigate new paths by highlighting relationships among apparently distant disciplinary fields; to explore areas of interaction, experiences and potential applications; and to underline interactions and disciplinary developments but also, where present, defeats and setbacks. Within the journal, the Review Pages have the task of stimulating as much as possible the circulation of ideas and the discovery of new points of view. For this reason the section is founded on a series of basic references, required for the identification of new and more advanced interactions. These references are research, planning acts, actions and applications, analysed and investigated both for their ability to give a systematic response to questions concerning urban and territorial planning, and for their attention to aspects such as environmental sustainability and innovation in practice. To this end, the Review Pages comprise five sections (Web Resources; Books; Laws; Urban Practices; News and Events), each of which examines a specific aspect of the broader body of information of interest to TeMA.

    Text-Based Network Industries and Endogenous Product Differentiation

    We study how firms differ from their competitors using new time-varying measures of product similarity based on text-based analysis of firm 10-K product descriptions. This year-by-year set of product similarity measures allows us to generate a new set of industries in which firms can have their own distinct set of competitors. Our new sets of competitors explain specific discussion of high competition, rivals identified by managers as peer firms, and changes to industry competitors following exogenous industry shocks. We also find evidence that firm R&D and advertising are associated with subsequent differentiation from competitors, consistent with theories of endogenous product differentiation.
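
    The construction behind such measures, each firm's product description turned into a word-count vector, with pairwise cosine similarity defining a firm-specific competitor set, can be sketched as follows. The firm names and descriptions are invented, and the paper's actual procedure (vocabulary filtering, yearly re-estimation, similarity thresholds) is considerably more elaborate.

        # Sketch: cosine similarity between firms' product descriptions,
        # giving each firm its own ranked competitor set. Data is invented.
        import math
        from collections import Counter

        descriptions = {
            "FirmA": "cloud storage backup enterprise software",
            "FirmB": "cloud software analytics enterprise platform",
            "FirmC": "retail apparel footwear stores",
        }

        def vector(text):
            """Bag-of-words count vector for a product description."""
            return Counter(text.lower().split())

        def cosine(u, v):
            dot = sum(u[w] * v[w] for w in u if w in v)
            norm = (math.sqrt(sum(c * c for c in u.values()))
                    * math.sqrt(sum(c * c for c in v.values())))
            return dot / norm if norm else 0.0

        vecs = {f: vector(t) for f, t in descriptions.items()}
        for f in vecs:
            rivals = sorted(((cosine(vecs[f], vecs[g]), g)
                             for g in vecs if g != f), reverse=True)
            print(f, "->", [(g, round(s, 2)) for s, g in rivals])
        # FirmA and FirmB score 0.6 with each other; FirmC shares no
        # vocabulary with either, so it falls into a different "industry".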

    CATTmew: Defeating Software-only Physical Kernel Isolation

    All state-of-the-art rowhammer attacks can break the MMU-enforced inter-domain isolation because the physical memory owned by one domain is adjacent to that owned by others. To mitigate these attacks, physical domain isolation, introduced by CATT, physically separates the domains by dividing physical memory into multiple partitions and keeping each partition occupied by only one domain. CATT implemented physical kernel isolation as the first generic and practical software-only defense to protect the kernel from being rowhammered, the kernel being one of the most appealing targets. In this paper, we develop a novel exploit that can effectively defeat physical kernel isolation and gain both root and kernel privileges. Our exploit works without exhausting the page cache or the system memory, and without relying on information about the virtual-to-physical address mapping. The exploit is motivated by our key observation that modern OSes have double-owned kernel buffers (e.g., video buffers and SCSI Generic buffers) owned concurrently by the kernel and user domains. The existence of such buffers invalidates the physical kernel isolation and makes rowhammer-based attacks possible again. Existing conspicuous rowhammer attacks achieving root/kernel privilege escalation exhaust the page cache or even the whole system memory. Instead, we propose a new technique, named memory ambush, which is able to place the hammerable double-owned kernel buffers physically adjacent to the target objects (e.g., page tables) using only a small amount of memory. As a result, our exploit is stealthier and has a smaller memory footprint. We also replace the inefficient rowhammer algorithm that blindly picks addresses to hammer with an efficient one that selects suitable addresses based on an existing timing channel.
    Comment: Preprint of the work accepted at the IEEE Transactions on Dependable and Secure Computing 201

    Emerging technologies for reef fisheries research and management [held during the 56th annual Gulf and Caribbean Fisheries Institute meeting in Tortola, British Virgin Islands, November 2003]

    This publication of the NOAA Professional Paper NMFS Series is the product of a special symposium on “Emerging Technologies for Reef Fisheries Research and Management” held during the 56th annual Gulf and Caribbean Fisheries Institute meeting in Tortola, British Virgin Islands, November 2003. The purpose of this collection is to highlight the diversity of questions and issues in reef fisheries management that are benefiting from applications of technology. Topics cover a wide variety of questions and issues, from the study of individual behavior to the distribution and abundance of groups and populations, and associations between habitats and fish and shellfish species. (PDF file contains 124 pages.)