
    Comparison of the MPP with other supercomputers for LANDSAT data processing

    The massively parallel processor (MPP) is compared with the CRAY X-MP and the CYBER-205 for LANDSAT data processing. The maximum likelihood classification algorithm is the basis for the comparison, since this algorithm is simple to implement and vectorizes very well. The algorithm was implemented on all three machines and tested by classifying the same full scene of LANDSAT multispectral scan data. Timings are compared, as are features of the machines and the available software.
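
    As an illustration of the classification rule the comparison rests on, the following is a minimal NumPy sketch of per-pixel Gaussian maximum likelihood classification. The class statistics, band count and toy data are assumptions for illustration, not values drawn from the paper.

        import numpy as np

        def ml_classify(pixels, means, covs):
            # Assign each pixel to the class k maximizing the discriminant
            # g_k(x) = -ln|Sigma_k| - (x - mu_k)^T Sigma_k^{-1} (x - mu_k).
            scores = []
            for mu, cov in zip(means, covs):
                inv = np.linalg.inv(cov)
                _, logdet = np.linalg.slogdet(cov)
                d = pixels - mu
                maha = np.einsum("ij,jk,ik->i", d, inv, d)  # squared Mahalanobis distance
                scores.append(-logdet - maha)
            return np.argmax(np.stack(scores, axis=1), axis=1)

        # Toy example: two classes observed in four spectral bands.
        rng = np.random.default_rng(0)
        means = [np.full(4, 30.0), np.full(4, 80.0)]
        covs = [np.eye(4) * 25.0, np.eye(4) * 25.0]
        pixels = rng.normal(means[1], 5.0, size=(10, 4))
        print(ml_classify(pixels, means, covs))  # mostly class 1

    The per-class computation is a dense, data-parallel operation over all pixels at once, which is why the algorithm vectorizes so well on machines like the CRAY X-MP and the CYBER-205.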

    Demotivators as deprecating and phatic multimodal communicative acts

    The aim of this chapter is to show how an analytical framework combining multimodal, semantic and pragmatic analysis can account for the way in which demotivators function as communicative acts. A demotivator is a combination of an eloquent picture or photograph and a caption commenting on its content, usually to ironic effect. In what follows I address the origin and definition of demotivators, the evolution of their socio-communicative function, the categorisation of demotivators by area of focus, and the relation between their linguistic and visual components. The empirical material includes demotivators created by members of the Polish-, English- and Russian-speaking communities. A corpus of over 1,000 items has been gathered, retrieved from the following websites: http://demotywatory.pl/, www.demotivators.ru, www.demotivers.com, and the social networking service Facebook. The study provides evidence that demotivators should be treated as a discrete category of units which have the potential to develop in pragmatic and multimodal directions.

    Saunter: reveries on time


    Hardening High-Assurance Security Systems with Trusted Computing

    We are living in the time of the digital revolution, in which the world we know changes beyond recognition every decade. The positive aspect is that these changes also drive progress in the quality and availability of digital assets crucial to our societies: broadly available communication channels allowing the quick exchange of knowledge over long distances, systems controlling the automatic sharing and distribution of renewable energy in international power grids, easily accessible applications for early disease detection that enable self-examination without burdening the health service, and governmental systems that let citizens settle official matters without leaving their homes. Unfortunately, digitalization also gives malicious actors opportunities to threaten our societies if they gain control over these assets by exploiting vulnerabilities in the complex computing systems that build them. Protecting these systems, which are called high-assurance security systems, is therefore of the utmost importance.

    For decades, humanity has struggled to find methods to protect high-assurance security systems. Advances in the computing systems security domain have popularized hardware-assisted security techniques, nowadays available in commodity computers, that open perspectives for building more sophisticated defense mechanisms at lower cost. However, none of these techniques is a silver bullet: each targets particular use cases, suffers from limitations, and is vulnerable to specific attacks. I argue that some of these techniques are synergistic and, when used together, help overcome limitations and mitigate specific attacks. My reasoning is supported by regulations that legally bind the owners of high-assurance security systems to provide strong security guarantees; these requirements can be fulfilled with the help of diverse technologies standardized in recent years.

    In this thesis, I introduce new techniques for hardening high-assurance security systems that execute in remote execution environments, such as public and hybrid clouds. I implemented these techniques as part of a framework that provides technical assurance that high-assurance security systems execute in a specific data center, on top of a trustworthy operating system, in a virtual machine controlled by a trustworthy hypervisor, or in strong isolation from other software. I demonstrated the practicality of my approach by leveraging the framework to harden real-world applications, such as machine learning applications in the eHealth domain. The evaluation shows that the framework is practical: it induces low performance overhead (<6%), supports software updates, requires no changes to the legacy application's source code, and can be tailored to individual trust boundaries with the help of security policies.

    The framework consists of a decentralized monitoring system that offers better scalability than traditional centralized monitoring systems. Each monitored machine runs a piece of code that verifies that the machine's integrity and geolocation conform to the given security policy. This piece of code, which serves as a trusted anchor on that machine, executes inside a trusted execution environment, i.e., Intel SGX, to protect itself from the untrusted host, and uses trusted computing techniques, such as the trusted platform module, secure boot, and the integrity measurement architecture, to attest to the load-time and runtime integrity of the surrounding operating system running on a bare-metal machine or inside a virtual machine. The trusted anchor implements my novel, formally proven protocol, which enables detection of the TPM cuckoo attack.

    The framework also implements a key distribution protocol that, depending on the individual security requirements, shares cryptographic keys only with high-assurance security systems executing in predefined security settings, i.e., inside trusted execution environments or inside an integrity-enforced operating system. Such an approach is particularly appealing in the context of machine learning systems, where some algorithms, like machine learning model training, require temporary access to large computing power. These algorithms can execute inside a dedicated, trusted data center at higher performance because they are not limited by the security features required in a shared execution environment. The evaluation of the framework showed that training a machine learning model on real-world datasets achieved 0.96x native execution performance on the GPU and a speedup of up to 1560x compared to the state-of-the-art SGX-based system.

    Finally, I tackled the problem of software updates, which make operating system integrity monitoring unreliable due to false positives: a software update moves the updated system to an unknown (untrusted) state that is reported as an integrity violation. I solved this problem by introducing a proxy to a software repository that sanitizes software packages so that they can be safely installed. Sanitization consists of predicting and certifying the operating system's future state, i.e., its state after the specific updates are installed. The evaluation of this approach showed that it supports 99.76% of the packages available in the Alpine Linux main and community repositories.

    The framework proposed in this thesis is a step forward in verifying and enforcing that high-assurance security systems execute in an environment compliant with regulations. I anticipate that the framework might be further integrated with industry-standard security information and event management tools, as well as other security monitoring mechanisms, to provide a comprehensive solution for hardening high-assurance security systems.
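
    To make the monitoring step concrete, below is a minimal, hypothetical sketch of the check the trusted anchor performs: measurements are compared against a policy allowlist together with the attested geolocation. All names, values and the policy format are assumptions; the actual framework runs this logic inside Intel SGX and consumes TPM/IMA attestations rather than plain dictionaries.

        # Hypothetical security policy: allowlisted SHA-256 digests of measured
        # components plus the data-center location the machine must attest to.
        policy = {
            "allowed_digests": {"app.bin": "0" * 64},  # placeholder digest
            "required_location": "datacenter-eu-1",
        }

        def verify(measurements, location, policy):
            # Trusted-anchor check: every measurement must match the allowlist,
            # and the attested geolocation must equal the required one.
            for path, digest in measurements.items():
                if policy["allowed_digests"].get(path) != digest:
                    return False  # unknown or modified component: integrity violation
            return location == policy["required_location"]

        print(verify({"app.bin": "0" * 64}, "datacenter-eu-1", policy))  # True
        print(verify({"app.bin": "f" * 64}, "datacenter-eu-1", policy))  # False

    The package-sanitization proxy follows the same idea in reverse: it predicts the digests an update will produce and certifies them in advance, so the post-update state is already on the allowlist.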

    The impact of foreign direct investment on interregional disparities in Poland

    The aim of this study is to present selected theories on the location of foreign direct investment, to illustrate the significance of foreign direct investment for a region's economy, and to show the impact of this type of investment on interregional disparities, using Poland as an example. Foreign direct investment is a driving force of regional development, as it introduces modern technologies and creates new jobs. This type of investment can also increase the efficiency and productivity of the economy. Foreign capital, although very important for regional development, nevertheless bypasses regions with weak infrastructure and high unemployment, which deepens existing disparities. Regions with high levels of foreign direct investment are characterized by lower unemployment, higher average wages and, above all, higher gross domestic product per capita.

    Diffuse idiopathic skeletal hyperostosis in a late nineteenth to early twentieth century almshouse cemetery

    Diffuse idiopathic skeletal hyperostosis (DISH) is a rheumatological term for a particular type of vertebral arthritis involving calcification of the right aspect of the anterior longitudinal ligament (ALL) and the presence of ligament ossification at particular peripheral joints. DISH is most common among middle-aged to older males and is thought to be present in 10% of males over the age of 65. Although the etiology of the disease is unknown, many have associated it with diabetes and a high-status lifestyle. In this thesis, DISH is examined in a late nineteenth to early twentieth century almshouse cemetery known as the Milwaukee County Institution Grounds (MCIG) cemetery. Given the health and diet of the immigrant peoples living in Milwaukee during the cemetery's period of use, 1850 to 1974, it is suspected that diabetes would not have been a common disorder, which should lead to little or no DISH in the cemetery population. However, DISH is seen in the MCIG population, which suggests that its etiology is not a result of diet or diabetes but of other factors altogether.

    Governing through Learning: School Self-Evaluation as a Knowledge-based Regulatory Tool

    This paper discusses knowledge-based regulation tools (KBRTs) as new forms of regulation through an exploration of school self-evaluation (SSE) in Scotland. We conceptualise self-evaluation as a hybrid regulatory instrument, combining data-based knowledge with knowledges "performed" by institutions and individuals in order to demonstrate their progress on the "journey to excellence" in learning (HMIe, 2009) that is expected of schools, teachers and learners in Scotland. We see the development of self-evaluation in Scotland and more widely as arising from an earlier over-reliance on data and from the proliferation of information, which together produce the problem of "evidence" as a governing technology. Data require continuous and demanding work, including interpretive work, if they are to be effective. SSE, we suggest, offers a combination of data-based knowledge with professional expertise and individual responsibility that enables the governing and shaping of the school as a "learning organisation" and, in the context of Scotland on which this paper primarily focuses, reflects the presentation of governing as a learning activity in which pupils, teachers, local authorities and government itself are collectively engaged.

    Statistical characteristics of the damped vibrations of a string excited by stochastic forces

    Our theoretical study aims at finding statistical parameters characterizing the damped vibrations of a string excited by stochastic impulses. We derive the dependence of these parameters on the parameters of the string as well as on the stochastic distributions of the impulse magnitude and of the place of action. We also carry out a numerical simulation verifying the derived mathematical model and interpret the differences between the simulation results and the mathematical calculations. This study is the fourth stage of a research project aimed at designing a probe that facilitates measuring the parameters that determine the quality of a technological process.
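
    The setting can be illustrated with a toy simulation: a single damped mode of the string driven by impulses whose inter-arrival times are exponential and whose magnitudes are random. All parameter values below are illustrative assumptions, not those of the paper.

        import numpy as np

        rng = np.random.default_rng(1)
        omega, beta = 2 * np.pi, 0.3   # natural frequency and damping coefficient
        lam, T, dt = 1.0, 50.0, 1e-3   # impulse intensity, duration, time step

        n = int(T / dt)
        x = np.zeros(n)
        v = 0.0
        next_impulse = rng.exponential(1 / lam)  # exponential inter-arrival time

        for i in range(1, n):
            if i * dt >= next_impulse:
                v += rng.normal(1.0, 0.2)  # random impulse magnitude
                next_impulse += rng.exponential(1 / lam)
            a = -2 * beta * v - omega**2 * x[i - 1]  # damped oscillator dynamics
            v += a * dt
            x[i] = x[i - 1] + v * dt

        # One statistical characteristic of the response: its variance.
        print("displacement variance:", x.var())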

    Probability Discounting of Lewis and Fischer 344 Rats: Strain Comparisons at Baseline and Following Acute Administration of d-Amphetamine

    Risky choice can be defined as choosing a larger, uncertain reinforcer over a smaller, certain reinforcer even when choosing the smaller alternative would maximize reinforcement. Risky choice is studied using various procedures in the animal laboratory; one such procedure is probability discounting. Many variables contribute to risky decision-making, including biological and pharmacological determinants. The present study assessed both by evaluating dose-response effects of d-amphetamine on the risky choice of Lewis (LEW) and Fischer 344 (F344) rats. The probability-discounting procedure comprised discrete-trials choices between one food pellet delivered 100% of the time and three food pellets delivered with one of several probabilities; the probability of the three-pellet outcome decreased systematically across blocks within each session. At baseline, risky choice did not differ between LEW and F344. However, choice for LEW became significantly less risky over extended training, while choice for F344 remained relatively stable over time. d-Amphetamine significantly increased risky choice for both rat strains at low-to-moderate doses, although it did so at lower doses for F344 (0.1 and 0.3 mg/kg) than for LEW (0.3 mg/kg only), suggesting greater behavioral sensitivity to d-amphetamine in F344. High doses of d-amphetamine (1.0 and 1.8 mg/kg) produced overall disruptions in choice for both strains, indicated by reductions in choice of the larger, uncertain alternative when the probability of delivery was relatively high and increases when it was relatively low. These results stand in contrast to previous reports investigating impulsive choice (i.e., choice involving temporal delays rather than uncertainty) in LEW and F344. The present work thus underscores the importance of considering risky and impulsive choice as two separate, but related, behavioral processes.
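
    A short arithmetic check makes the "maximizing" baseline explicit: the uncertain option (three pellets with probability p) has a higher expected payoff than the certain single pellet only when 3p > 1, i.e. p > 1/3. The block probabilities below are assumed for illustration; the sketch simply tabulates which choice maximizes reinforcement in each block.

        # Illustrative block probabilities for the uncertain (3-pellet) option.
        for p in (1.0, 0.75, 0.5, 0.33, 0.25, 0.125):
            ev_risky = 3 * p  # expected pellets from the uncertain option
            best = "risky" if ev_risky > 1 else "certain"
            print(f"p = {p:5.3f}   EV(risky) = {ev_risky:4.2f}   maximizing choice: {best}")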

    Determining the distribution of values of stochastic impulses acting on a discrete system in relation to their intensity

    In our previous works we introduced and applied a mathematical model that allowed us to calculate, from the trajectory of its motion, the approximate distribution of the values of the stochastic impulses η_i forcing the vibrations of a damped oscillator. The mathematical model correctly describes the functioning of a physical RLC system if the damping coefficient is large and the intensity λ of the impulses is small, because then the inflow of energy is small and the behaviour of the RLC system is stable. In this paper we present experiments that characterize the behaviour of an RLC oscillator in relation to the intensity parameter λ, more precisely to λE(η). The parameter λ is the constant of the exponential distribution of the random variables τ_i, where τ_i = t_i − t_{i−1}, i = 1, 2, ..., are the intervals between successive impulses.
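
    The role of λ is easy to see in simulation: intervals τ_i drawn from an exponential distribution with rate λ have mean 1/λ, so λE(η) measures the mean impulse inflow per unit time. The numerical values below are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        lam = 0.5        # impulse intensity lambda (illustrative)
        eta_mean = 2.0   # assumed mean impulse magnitude E(eta)

        # Intervals tau_i = t_i - t_{i-1} ~ Exp(lam); NumPy takes scale = 1/lam.
        tau = rng.exponential(1 / lam, size=10_000)
        t = np.cumsum(tau)  # impulse instants t_i

        print("mean interval:", tau.mean(), "expected:", 1 / lam)
        print("last impulse instant:", t[-1])
        print("intensity parameter lambda*E(eta):", lam * eta_mean)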