Second-Person Surveillance: Politics of User Implication in Digital Documentaries
This dissertation analyzes digital documentaries that use second-person address and roleplay to make users feel implicated in contemporary refugee crises, mass incarceration in the U.S., and state and corporate surveillance. Digital documentaries are seemingly more interactive and participatory than linear film and video documentary: they combine a variety of auditory, visual, and written media, use networked technologies, and turn the documentary audience into a documentary user. I draw on scholarship from documentary, game, new media, and surveillance studies to analyze how second-person address in digital documentaries is configured through user positioning and direct address within the works themselves, in how organizations and creators frame their productions, and in how users and players respond in reviews, discussion forums, and Let’s Plays. I build on Michael Rothberg’s theorization of the implicated subject to explore how these digital documentaries bring the user into complicated relationality with national and international crises. Visually and experientially implying that users bear responsibility to the subjects and subject matter, these works can, on the one hand, replicate modes of liberal empathy for suffering, distant “others” and, on the other, simulate one’s own surveillant modes of observation or behavior, mirroring them back to users and opening up one’s offline thoughts and actions as a site of critique.
This dissertation charts how second-person address shapes and limits the political potentialities of documentary projects and connects them to a lineage of direct address in educational and propaganda films, museum exhibits, and serious games. By centralizing the user’s individual experience, the interventions that second-person digital documentaries can make into social discourse shift from public, institution-based education to more privatized forms of sentimental education geared toward personal edification and self-realization. I argue that, unless tied to larger initiatives or movements, these digital documentaries reaffirm a neoliberal politics of individual self-regulation and governance rather than public education or collective, social intervention.
Chapter one focuses on 360-degree virtual reality (VR) documentaries that utilize the feeling of presence to position users as if among refugees and as witnesses to refugee experiences in camps outside of Europe and various dwellings in European cities. My analysis of Clouds Over Sidra (Gabo Arora and Chris Milk 2015) and The Displaced (Imraan Ismail and Ben C. Solomon 2015) shows how these VR documentaries utilize observational realism to make believable and immersive their representations of already empathetic refugees. The empathetic refugee is often young, vulnerable, depoliticized and dehistoricized and is a well-known trope in other forms of humanitarian media that continues into VR documentaries. Forced to Flee (Zahra Rasool 2017), I am Rohingya (Zahra Rasool 2017), So Leben Flüchtlinge in Berlin (Berliner Morgenpost 2017), and Limbo: A Virtual Experience of Waiting for Asylum (Shehani Fernando 2017) disrupt easy immersions into realistic-looking VR experiences of stereotyped representations and user identifications and, instead, can reflect back the user’s political inaction and surveillant modes of looking.
Chapter two analyzes web- and social media messenger-based documentaries that position users as outsiders to U.S. mass incarceration. Users are noir-style co-investigators into the crime of the prison-industrial complex in Fremont County, Colorado in Prison Valley: The Prison Industry (David Dufresne and Philippe Brault 2009) and co-riders on a bus transporting prison inmates’ loved ones for visitations to correctional facilities in Upstate New York in A Temporary Contact (Nirit Peled and Sara Kolster 2017). Both projects construct an experience of carceral constraint for users to reinscribe seeming “outside” places, people, and experiences as within the continuation of the racialized and classed politics of state control through mass incarceration. These projects utilize interfaces that create a tension between replicating an exploitative hierarchy between non-incarcerated users and those subject to mass incarceration while also de-immersing users in these experiences to mirror back the user’s supposed distance from this mode of state regulation.
Chapter three investigates a type of digital game I term dataveillance simulation games, which position users as surveillance agents in ambiguously dystopian nation-states and force users to use their own critical thinking and judgment to construct the criminality of state-sanctioned surveillance targets. Project Perfect Citizen (Bad Cop Studios 2016), Orwell: Keeping an Eye on You (Osmotic Studios 2016), and Papers, Please (Lucas Pope 2013) all create a dual empathy: players empathize with bureaucratic surveillance agents while empathizing with surveillance targets whose emails, text messages, documents, and social media profiles reveal them to be “normal” people. I argue that while these games show criminality to be a construct, they also utilize a racialized fear of the loss of one’s individual privacy to make players feel like they too could be surveillance targets.
Chapter four examines personalized digital documentaries that turn users and their data into the subject matter. Do Not Track (Brett Gaylor 2015), A Week with Wanda (Joe Derry Hall 2019), Stealing Ur Feelings (Noah Levenson 2019), Alfred Premium (Joël Ronez, Pierre Corbinais, and Émilie F. Grenier 2019), How They Watch You (Nick Briz 2021), and Fairly Intelligent™ (A.M. Darke 2021) track, monitor, and confront users with their own online behavior to reflect back a corporate surveillance that collects, analyzes, and exploits user data for profit. These digital documentaries use emotional appeals based on fear and humor to persuade users that these technologies are controlling them, shaping their desires and needs, and dehumanizing them through algorithmic surveillance.
An examination of strategies employed by female protagonists to confront victimhood in domestic noir
This research examines how the female protagonists of domestic noir shed their victimhood and regain their agency, exploring how the concepts of female victimhood, female violence and female agency are portrayed in domestic noir. As domestic noir is a relatively new subgenre that emerged in 2012, there is still little research to be found, especially in terms of female victimhood and the depiction of femininities and masculinities of its protagonists. This study analyses the heterosexual marriages in three domestic noir novels, Gone Girl by Gillian Flynn (2012), The Silent Wife by A. S. A. Harrison (2013) and The Girl on the Train by Paula Hawkins (2015), exploring how the four major aspects of inversion of normative feminine and masculine behaviour, gender performance and masquerade, female victimhood and agency, and the relationship between gender and violence are portrayed in these novels. In exploring these aspects, this research aims to identify whether the female protagonists of domestic noir actually subvert patriarchal gender norms and the norms of traditional crime fiction, if they successfully shed their victimhood and whether their victimhood and violence categorise them as heroines, anti-heroines or villainesses.
Drawing on the existing literature on domestic noir and the reading of many domestic noir novels, this study suggests that the female protagonists of domestic noir employ five main strategies when seeking to escape their victimhood and regain their agency: gender performance, masquerade, inversion of normative feminine and masculine behaviour, recognising their victimhood, and violence. This study indicates that the female protagonists enact hegemonic masculinities, thereby threatening their male partners and the patriarchy, and are therefore labelled as pariah femininities, which results in male spousal abuse. To avoid this, the female protagonists engage in gender performances and masquerades of hegemonic femininity/hyperfemininity. However, despite being victims, the female protagonists are often simultaneously perpetrators of violence too. Furthermore, while the female protagonists do not employ the five strategies in a linear manner, they must employ all five to successfully shed their victimhood and regain their agency, and recognising their victimhood is the most crucial step in this process. Nevertheless, not all female protagonists successfully shed their victimhood and regain their agency, and among those who do, the degree to which they become agentic varies from protagonist to protagonist.
My research provides a framework for analysing domestic noir novels that focus on heterosexual marriages and romantic relationships, examining the inter-connected aspects of femininities and masculinities in domestic noir, the use of gender performance and masquerade, the concepts of female victimhood and agency, and the relationship between gender and violence, through how the protagonists employ the five strategies of gender performance, masquerade, inversion of normative feminine and masculine behaviour, recognising their victimhood, and violence. While these aspects have been explored in the existing literature, this research provides a more in-depth analysis by focusing on the intersections between the four major aspects of femininities and masculinities, gender performance and masquerade, female victimhood and agency, and violence in domestic noir.
Elements of Ion Linear Accelerators, Calm in The Resonances, Other_Tales
The main part of this book, Elements of Linear Accelerators, outlines in Part 1 a framework for non-relativistic linear accelerator focusing and accelerating channel design, simulation, optimization, and analysis where space charge is an important factor. Part 1 is the most important part of the book; grasping the framework is essential to fully understand and appreciate the elements within it, and the myriad application details of the following Parts. The treatment concentrates on all linacs, large or small, intended for high-intensity, very low beam loss, factory-type application. The Radio-Frequency Quadrupole (RFQ) is especially developed as a representative and the most complicated linac form (from dc to bunched and accelerated beam), extending to practical design of long, high-energy linacs, including space charge resonances and beam halo formation, and some challenges for future work. A practical method is also presented for designing Alternating-Phase-Focused (APF) linacs with long sequences and high energy gain. Full open-source software is available. The following part, Calm in the Resonances and Other Tales, contains eyewitness accounts of nearly 60 years of participation in accelerator technology. (September 2023) The LINACS codes are released at no cost and, as always, with fully open-source coding (p.2 & Ch 19.10).
Comment: 652 pages. Some hundreds of figures, all images; there is no data in the figures.
Transactional memory for high-performance embedded systems
The increasing demand for computational power in embedded systems, required for tasks such as autonomous driving, can only be met by exploiting the resources offered by modern hardware. Due to physical limitations, hardware manufacturers have moved to increase the number of cores per processor instead of further increasing clock rates. Therefore, in our view, the additionally required computing power can only be achieved by exploiting parallelism. Unfortunately, writing parallel code is considered a difficult and complex task.
Hardware Transactional Memories (HTMs) are a suitable tool for writing sophisticated parallel software. However, HTMs were not specifically developed for embedded systems and therefore cannot be used without consideration. The use of conventional HTMs increases complexity and makes it more difficult to foresee implications for other important properties of embedded systems.
This thesis therefore describes how an HTM for embedded systems could be implemented. The HTM was designed to allow the parallel execution of software and to offer functionality useful for embedded systems. The focus lay on: eliminating the typical limitations of conventional HTMs, several conflict resolution mechanisms, investigating real-time behavior, and a feature to conserve energy.
To enable the desired functionalities, the structure of the HTM described in this work differs strongly from a conventional HTM. In comparison to the baseline HTM, which was also designed and implemented in this thesis, the biggest adaptation concerns conflict detection: it was modified so that conflicts can be detected and resolved centrally. For this, the cache hierarchy as well as the cache coherence had to be adapted and partially extended.
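The centralized conflict detection described above can be illustrated with a small software sketch. This is a hypothetical Python model, not the thesis's hardware implementation: a central arbiter records each transaction's read and write sets and aborts a committing transaction whose sets intersect another in-flight transaction's write set.

```python
# Illustrative model of centralized transactional conflict detection.
# All names are hypothetical; a real HTM tracks these sets in the caches.

class CentralArbiter:
    def __init__(self):
        self.read_sets = {}   # transaction id -> set of addresses read
        self.write_sets = {}  # transaction id -> set of addresses written

    def begin(self, txn):
        self.read_sets[txn] = set()
        self.write_sets[txn] = set()

    def record_read(self, txn, addr):
        self.read_sets[txn].add(addr)

    def record_write(self, txn, addr):
        self.write_sets[txn].add(addr)

    def commit(self, txn):
        """Commit txn unless its read/write sets overlap another
        in-flight transaction's write set; returns True on success."""
        for other, writes in self.write_sets.items():
            if other == txn:
                continue
            if writes & (self.read_sets[txn] | self.write_sets[txn]):
                self._drop(txn)
                return False  # conflict detected centrally: abort, caller retries
        self._drop(txn)
        return True

    def _drop(self, txn):
        del self.read_sets[txn]
        del self.write_sets[txn]
```

For example, if transaction `t1` writes address `0x10` while `t2` reads it, the arbiter aborts whichever of the two tries to commit while the other is still in flight; the survivor then commits cleanly.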
The system was implemented in the cycle-accurate gem5 simulator. The eight benchmarks of the STAMP benchmark suite were used for evaluation. The evaluation of the various functionalities shows that the mechanisms work and add value for the operation in embedded systems.
Security of Electrical, Optical and Wireless On-Chip Interconnects: A Survey
The advancement of manufacturing technologies has enabled the integration of more intellectual property (IP) cores on the same system-on-chip (SoC). Scalable and high-throughput on-chip communication architecture has become a vital component in today's SoCs. Diverse technologies such as electrical, wireless, optical, and hybrid are available for on-chip communication, with different architectures supporting them. Security of the on-chip communication is crucial because exploiting any vulnerability would be a goldmine for an attacker. In this survey, we provide a comprehensive review of threat models, attacks, and countermeasures over diverse on-chip communication technologies as well as sophisticated architectures.
Comment: 41 pages, 24 figures, 4 tables.
A simulator for multiprocessor architectures used in the space sector to support the development of software fault-tolerance mechanisms
The colonization of the Moon and Mars, or space mining, are ideas that humanity has been developing for decades but that today seem within reach thanks to technological progress, the new space race in which China is increasingly relevant, and growing interest from governments, agencies, and companies. Part of this technological progress lies in the use of ever more powerful and versatile space hardware and software. In any space mission, software is a fundamental component that allows the system to be configured in different ways and handles the exceptional situations that may arise. Moreover, in the space environment both hardware and software must satisfy fault-tolerance requirements to mitigate, as far as possible, the errors produced by radiation. Verifying these requirements in critical systems consumes an ever larger share of the resources devoted to system development, above all in multicore systems. In this context, new tools are needed for the early development and verification of embedded software. Among these tools is the proposal of this thesis, which tackles the problem through simulation and fault-injection techniques.
Given the time constraints in the development of on-board systems and the strict robustness requirements of space software, this verification must be carried out at very early stages of development. Ideally, these tasks would be performed in parallel with hardware development, making it possible to anticipate discrepancies and problems in the system specification. It is worth noting that verifying software fault-tolerance mechanisms can be difficult or impossible to do on the hardware itself, since hardware fault-injection techniques limit the reproducibility of specific scenarios, as well as their observability and the controllability and transparency of the injected faults.
This thesis describes the research, development, and use of the "LeonViP-MC" virtual platform, which supports fault injection and uses Dynamic Binary Translation via LLVM, together with coroutines, to simulate multicore systems. With this platform, the same binary that runs on the real hardware can be executed in a controlled and deterministic environment. This has made it possible to carry out fault-injection campaigns that would not otherwise be feasible. Its use has demonstrated the reliability of the fault-tolerance techniques implemented both in the boot software of the Instrument Control Unit (ICU) of the Energetic Particle Detector (EPD) on board Solar Orbiter and in an application based on an ARINC 653 communication channel that is part of the development of a future hypervisor for the GR740 multicore system.
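The idea behind such fault-injection campaigns can be sketched in a few lines. The following Python model (hypothetical, not the LeonViP-MC code) injects a single-event upset, modeled as one random bit-flip, into one of three redundant copies of a register value and checks that a software triple-modular-redundancy (TMR) vote still recovers the correct value.

```python
import random

def flip_bit(value, bit, width=32):
    """Single-event-upset model: flip one bit of a register value."""
    return (value ^ (1 << bit)) & ((1 << width) - 1)

def tmr_vote(a, b, c):
    """Software triple modular redundancy: bitwise majority vote."""
    return (a & b) | (a & c) | (b & c)

def campaign(true_value, trials, seed=0):
    """Fault-injection campaign: per trial, corrupt one random bit in
    one of three redundant copies, then count correct voted results.
    Deterministic seeding gives the reproducibility that hardware
    injection lacks."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        copies = [true_value] * 3
        victim = rng.randrange(3)
        copies[victim] = flip_bit(copies[victim], rng.randrange(32))
        if tmr_vote(*copies) == true_value:
            correct += 1
    return correct
```

With exactly one upset per trial, the majority vote always recovers the true value, so `campaign(x, n)` returns `n`; campaigns with multiple simultaneous upsets would expose the limits of the mitigation.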
Adaptable register file organization for vector processors
Today there are two main trends in vector processor design. On the one hand, we have vector processors designed for long vector lengths, such as the SX-Aurora TSUBASA, which implements vector lengths of 256 elements (16384 bits). On the other hand, we have vector processors designed for short vectors, such as the Fujitsu A64FX, which implements ARM SVE vector lengths of 8 elements (512 bits). However, short vector designs are the most widely adopted in modern chips. This is because, to achieve high performance with very high efficiency, applications executed on long vector designs must feature abundant DLP, thus limiting the range of applications. On the contrary, short vector designs are compatible with a larger range of applications. In fact, early long vector implementations were focused on the HPC market, while short vector implementations were conceived to improve performance in multimedia tasks. However, those short vector extensions have evolved to better fit the needs of modern applications. In that sense, we believe that this compatibility with a large range of applications featuring high, medium, and low DLP is one of the main reasons behind the trend of building parallel machines with short vectors. Short vector designs are area efficient and are "compatible" with applications having long vectors; however, these short vector architectures are not as efficient as longer vector designs when executing high-DLP code.
In this thesis, we propose a novel vector architecture that combines the area and resource efficiency characterizing short vector processors with the ability to handle large-DLP applications, as allowed in long vector architectures. In this context, we present AVA, an Adaptable Vector Architecture designed for short vectors (MVL = 16 elements), capable of reconfiguring the MVL when executing applications with abundant DLP, achieving performance comparable to designs for long vectors. The design is based on three complementary concepts. First, a two-stage renaming unit based on a new type of registers termed Virtual Vector Registers (VVRs), which are an intermediate mapping between the conventional logical registers and the physical and memory registers. In the first stage, logical registers are renamed to VVRs, while in the second stage, VVRs are renamed to physical registers. Second, a two-level VRF that supports 64 VVRs whose MVL can be configured from 16 to 128 elements. The first level corresponds to the VVRs mapped to the physical registers held in the 8KB Physical Vector Register File (P-VRF), while the second level represents the VVRs mapped to memory registers held in the Memory Vector Register File (M-VRF). While the baseline configuration (MVL=16 elements) holds all the VVRs in the P-VRF, larger MVL configurations hold a subset of the total VVRs in the P-VRF and map the remaining part in the M-VRF. Third, we propose a novel two-stage vector issue unit. In the first stage, the second level of mapping between the VVRs and physical registers is performed, while instruction issue to execution is managed in the second stage.
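The two-stage mapping described above can be sketched in software. The following Python model is illustrative only (the capacities mirror the figures quoted in the abstract, but the policy is a simplified assumption, not AVA's actual logic): stage 1 renames a logical register to a fresh VVR, and stage 2 places a VVR either in the physical file or in memory, depending on how many base-length registers the P-VRF can hold at the configured MVL.

```python
# Hypothetical sketch of two-stage vector register renaming:
# logical register -> VVR (stage 1) -> P-VRF or M-VRF slot (stage 2).

class TwoStageRenamer:
    def __init__(self, n_vvr=64, n_physical=64):
        self.n_vvr = n_vvr            # architected pool of VVRs
        self.n_physical = n_physical  # base-MVL registers the P-VRF holds
        self.stage1 = {}              # logical register -> VVR id
        self.next_vvr = 0

    def rename_logical(self, logical):
        """Stage 1: map a logical vector register to the next free VVR."""
        vvr = self.next_vvr % self.n_vvr
        self.next_vvr += 1
        self.stage1[logical] = vvr
        return vvr

    def place_vvr(self, vvr, mvl, base_mvl=16):
        """Stage 2: when MVL is scaled up, only a subset of VVRs fits in
        the fixed-size physical file; the rest spill to memory registers."""
        in_pvrf = self.n_physical * base_mvl // mvl  # VVRs that fit in P-VRF
        if vvr < in_pvrf:
            return ("P-VRF", vvr)
        return ("M-VRF", vvr - in_pvrf)
```

Under this model, at the baseline MVL of 16 all 64 VVRs live in the P-VRF, while at MVL = 128 only 8 fit and the remaining 56 map to the M-VRF, matching the subset-spilling behavior the abstract describes.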
This thesis also presents a set of tools for designing and evaluating vector architectures. First, a parameterizable vector architecture model implemented on the gem5 simulator to evaluate novel ideas on vector architectures. Second, a vector architecture model implemented on the McPAT framework to evaluate power and area metrics. Finally, the RiVEC benchmark suite, a collection of ten vectorized applications from different domains focusing on benchmarking vector microarchitectures.
Robust and secure resource management for automotive cyber-physical systems
Modern vehicles are examples of complex cyber-physical systems with tens to hundreds of interconnected Electronic Control Units (ECUs) that manage various vehicular subsystems. With the shift towards autonomous driving, emerging vehicles are being characterized by an increase in the number of hardware ECUs, greater complexity of applications (software), and more sophisticated in-vehicle networks. These advances have resulted in numerous challenges that impact the reliability, security, and real-time performance of these emerging automotive systems. Some of the challenges include coping with computation and communication uncertainties (e.g., jitter), developing robust control software, detecting cyber-attacks, ensuring data integrity, and enabling confidentiality during communication. However, solutions to overcome these challenges incur additional overhead, which can catastrophically delay the execution of real-time automotive tasks and message transfers. Hence, there is a need for a holistic approach to a system-level solution for resource management in automotive cyber-physical systems that enables robust and secure automotive system design while satisfying a diverse set of system-wide constraints. ECUs in vehicles today run a variety of automotive applications ranging from simple vehicle window control to highly complex Advanced Driver Assistance System (ADAS) applications. The aggressive attempts of automakers to make vehicles fully autonomous have increased the complexity and data rate requirements of applications and further led to the adoption of advanced artificial intelligence (AI) based techniques for improved perception and control. Additionally, modern vehicles are becoming increasingly connected with various external systems to realize more robust vehicle autonomy.
These paradigm shifts have resulted in significant overheads in resource-constrained ECUs and increased the complexity of the overall automotive system (including heterogeneous ECUs, network architectures, communication protocols, and applications), which has severe performance and safety implications for modern vehicles. The increased complexity of automotive systems introduces several computation and communication uncertainties in automotive subsystems that can cause delays in applications and messages, resulting in missed real-time deadlines. Missing deadlines for safety-critical automotive applications can be catastrophic, and this problem will be further aggravated in the case of future autonomous vehicles. Additionally, due to the harsh operating conditions (such as high temperatures, vibrations, and electromagnetic interference (EMI)) of automotive embedded systems, there is a significant risk to the integrity of the data that is exchanged between ECUs, which can lead to faulty vehicle control. These challenges demand a more reliable design of automotive systems that is resilient to uncertainties and supports data integrity goals. Additionally, the increased connectivity of modern vehicles has made them highly vulnerable to various kinds of sophisticated security attacks. Hence, it is also vital to ensure the security of automotive systems, and this will become crucial as connected and autonomous vehicles become more ubiquitous. However, imposing security mechanisms on resource-constrained automotive systems can result in additional computation and communication overhead, potentially leading to further missed deadlines. Therefore, it is crucial to design techniques that incur very minimal (lightweight) overhead when trying to achieve the above-mentioned goals and ensure the real-time performance of the system.
We address these issues by designing a holistic resource management framework called ROSETTA that enables robust and secure automotive cyber-physical system design while satisfying a diverse set of constraints related to reliability, security, real-time performance, and energy consumption. To achieve reliability goals, we have developed several techniques for reliability-aware scheduling and multi-level monitoring of signal integrity. To achieve security objectives, we have proposed a lightweight security framework that provides confidentiality and authenticity while meeting both security and real-time constraints. We have also introduced multiple deep-learning-based intrusion detection systems (IDS) to monitor and detect cyber-attacks in the in-vehicle network. Lastly, we have introduced novel techniques for jitter management and security management and deployed lightweight IDSs on resource-constrained automotive ECUs while ensuring the real-time performance of the automotive systems.
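The kind of lightweight message authenticity described above is commonly built from truncated MACs plus a freshness counter. The following Python sketch illustrates the general technique only; it is an assumption for illustration, not ROSETTA's actual scheme, and the 4-byte tag length is a hypothetical parameter chosen to fit small in-vehicle payloads.

```python
import hmac
import hashlib

def tag_frame(key: bytes, counter: int, payload: bytes, tag_len: int = 4) -> bytes:
    """Compute a truncated HMAC-SHA256 over a freshness counter plus the
    frame payload. Truncation keeps per-frame overhead small (lightweight),
    and the monotonically increasing counter defends against replay."""
    msg = counter.to_bytes(4, "big") + payload
    return hmac.new(key, msg, hashlib.sha256).digest()[:tag_len]

def verify_frame(key: bytes, counter: int, payload: bytes,
                 tag: bytes, tag_len: int = 4) -> bool:
    """Recompute the expected tag and compare in constant time."""
    expected = tag_frame(key, counter, payload, tag_len)
    return hmac.compare_digest(expected, tag)
```

A receiver that tracks the last accepted counter per sender rejects both forged payloads and replayed frames, since a stale counter yields a mismatching tag.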
Explaining Shifts in White Racial Liberalism: The Role of Collective Moral Emotions and Media Effects
This dissertation seeks to understand the causes of racial liberalism among white Americans, including over-time shifts therein. Drawing on intergroup emotions theory from social psychology, I propose that negative ingroup-focused moral emotions—namely white shame and guilt—are important factors in the formation of racially liberal attitudes, such as white support for race-based affirmative action and government assistance. I further argue that not all whites are equally susceptible to such emotions; that those inclined towards structural attributions for inequality (e.g. white liberals) are more likely to experience them; and that the racial attitudes of such whites are thus more elastic than those of others. Finally, I contend that the salience of these emotions varies as a function of the availability of racial egalitarian media messaging that speaks to black-white status differences in terms of past and/or present white racism. Using cross-sectional, time series, panel, and experimental data, I test these propositions across multiple empirical chapters. I find general support for the theory across multiple methodologies. In the main, the findings suggest that, net of other attitudinally important variables (e.g. racial resentment, social dominance orientation), white racial attitudes would be far more conservative in the absence of collective shame and guilt; that over-time increases in white racial liberalism temporally follow increases in the availability of racial egalitarian media messaging, particularly among white liberals and Democrats; and that racial egalitarian media messaging elicits white shame and guilt, which, in turn, increase the expression of racially liberal attitudes and policy preferences. Taken as a whole, the findings have important implications for the existing literature on white racial attitudes, which remains overwhelmingly focused on negative or prejudicial intergroup orientations.
Art from Home to School: Towards a Critical Art Education Curriculum Framework in Postcolonial and Globalisation Contexts for Primary School Level in Uganda
Art from Home to School is an investigation that examined how to transform the school curriculum, restore a stronger sense of historical cultural awareness, and promote tolerance and cultural diversity through art education at primary school level in Uganda. Art from Home to School argues against the censorship of cultural heritage, earmarked as indigenous art and mother-tongue use, in primary schools of Uganda. It provides an inquiry into colonial and postcolonial educational policies that promote a Euro-centered school curriculum that stresses rote learning and encourages school violence through corporal punishment, which may ultimately result in physical abuse and in dropping out of school. Further, Art from Home to School attends to other antagonisms in the society and school in which the student persists, which cause socioeconomic inequalities and exploitation by reason of globalisation in education. It builds its knowledge base on Paulo Freire's Pedagogy of the Oppressed to transform teaching and learning focused on social change. Ethnographic research was used to review artworks produced by students as resistance to previously silenced voices, and the results obtained were used to plan a hypothetical critical curriculum of art education suggesting a vision of decolonising reforms.