
    Unreliable Physical Places and Memories as Posthuman Narration in Ishiguro’s Never Let Me Go

    In this paper, I argue that Ishiguro’s Never Let Me Go is best understood through analysis of its unstable places and the narrator’s unstable memory. Through these devices, Ishiguro constructs a panoptic state of surveillance, transforming an otherwise non-urban space into a pseudo-cityscape. It is through the narrator’s interactions, and memories of her interactions, with these urbanized and controlled spaces that the reader can truly understand and engage with this posthuman narrative. Without fully understanding the ways in which rural places function as cityscapes for the clone characters of this novel, the reader is unable to meaningfully understand the experiences of the clones. This paper employs the theories of Edward W. Soja in order to advance discussion of this novel beyond its application of the panoptic mechanism. It also looks closely at the ways the memories of the displaced are used to manipulate the concept of place and its function throughout the novel.

    Semi-partitioned scheduling and task migration in dataflow networks

    This thesis proposes design methodologies and techniques in the context of embedded computing systems. In particular, it focuses on embedded streaming systems, i.e., systems that process a continuous, possibly infinite stream of data from the environment. Typical examples of such systems are audio and video encoders and decoders. In order to achieve higher performance, embedded streaming systems are nowadays often implemented on execution platforms that contain multiple processors on a single chip. These execution platforms are called Multi-Processor Systems-on-Chip (MPSoCs). To exploit the parallelism available in MPSoCs, applications have to be decomposed into portions (also called tasks) that are inter-dependent but can be executed in parallel. Each of these tasks is assigned to a certain processor of the system. This assignment of tasks to processors is called spatial scheduling of tasks, or task mapping. This thesis proposes techniques to optimize and adapt the mapping of tasks to processors at run-time, in order to achieve higher processor utilization, improve energy efficiency, or make the system fault-tolerant.
    Computer Systems, Imagery and Media
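    To make the task-mapping idea concrete, the following is a minimal, hypothetical sketch of one simple spatial-scheduling heuristic (greedy worst-fit by utilization); it is not the semi-partitioned or run-time adaptive scheme proposed in the thesis, and the task names, utilization values, and processor count are illustrative assumptions.

    # Illustrative sketch only: greedy worst-fit mapping of tasks to processors
    # by utilization. The task set and numbers are hypothetical.
    def map_tasks(task_utilizations, num_processors):
        """Assign each task to the currently least-loaded processor."""
        load = [0.0] * num_processors        # accumulated utilization per processor
        mapping = {}                         # task name -> processor index
        # Place heavier tasks first to reduce load imbalance.
        for task, u in sorted(task_utilizations.items(), key=lambda kv: -kv[1]):
            target = min(range(num_processors), key=lambda p: load[p])
            mapping[task] = target
            load[target] += u
        return mapping, load

    tasks = {"decode": 0.6, "filter": 0.4, "mix": 0.3, "output": 0.2}  # hypothetical
    mapping, load = map_tasks(tasks, num_processors=2)
    print(mapping)   # which processor each task runs on
    print(load)      # per-processor utilization after mapping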

    On the bullwhip avoidance phase: the synchronized supply chain

    The aim of this paper is to analyse the operational response of a Synchronised Supply Chain (SSC). To do so, first a new mathematical model of an SSC is presented. An exhaustive Latin Square design of experiments is adopted in order to perform a boundary variation analysis of the three main parameters of the periodic review smoothing (S,R) order-up-to policy: i.e., the lead time, the demand smoothing forecasting factor, and the proportional controller of the replenishment rule. The model is then evaluated under a variety of performance measures based on internal process benefits and customer benefits. The main results of the analysis are: (I) the SSC responds to violent changes in demand by resolving the bullwhip effect and by creating stability in inventories under different parameter settings, and (II) in an SSC, long production–distribution lead times could significantly affect customer service level. Both results have important consequences for the design and operation of supply chains.
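    The paper's SSC model is not reproduced here, but the following minimal sketch shows a generic periodic-review smoothing order-up-to rule with the three parameters the abstract names (lead time, forecasting smoothing factor, proportional controller); the parameter values, variable names, and demand series are illustrative assumptions, not the authors' settings.

    # Hedged sketch of a periodic-review smoothing (S,R) order-up-to rule,
    # not the paper's full synchronised-supply-chain model.
    def simulate(demand, lead_time=2, alpha=0.3, beta=0.5):
        """alpha: exponential-smoothing forecast factor; beta: proportional controller."""
        forecast = demand[0]
        inventory = lead_time * forecast
        pipeline = [forecast] * lead_time        # orders already placed, not yet received
        orders = []
        for d in demand:
            inventory += pipeline.pop(0) - d     # receive oldest order, ship demand
            forecast = alpha * d + (1 - alpha) * forecast
            target = (lead_time + 1) * forecast  # order-up-to level S
            position = inventory + sum(pipeline) # net stock plus work in progress
            order = max(0.0, forecast + beta * (target - position))
            pipeline.append(order)
            orders.append(order)
        return orders

    # A step change in demand: with beta < 1 the order response is damped rather
    # than amplified, which is the bullwhip-avoidance behaviour studied above.
    print(simulate([10] * 5 + [14] * 15))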

    CULTURAL HERITAGE DIGITAL PRESERVATION THROUGH AI-DRIVEN ROBOTICS

    This paper introduces a novel methodology developed for creating 3D models of archaeological artifacts that reduces the time and effort required by operators. The approach uses a simple vision system mounted on a robotic arm that follows a predetermined path around the object to be reconstructed. The robotic system captures different viewing angles of the object and assigns 3D coordinates corresponding to the robot's pose, allowing it to adjust the trajectory to accommodate objects of various shapes and sizes. The angular displacement between consecutive acquisitions can also be fine-tuned based on the desired final resolution. This flexible approach is suitable for different object sizes, textures, and levels of detail, making it ideal for both large volumes with low detail and small volumes with high detail. The recorded images and assigned coordinates are fed into a constrained implementation of the structure-from-motion (SfM) algorithm, which uses the scale-invariant feature transform (SIFT) method to detect key points in each image. By utilising a priori knowledge of the coordinates together with the SIFT algorithm, low processing time can be ensured while maintaining high accuracy in the final reconstruction. The use of a robotic system to acquire images at a pre-defined pace ensures high repeatability and consistency across different 3D reconstructions, eliminating operator errors in the workflow. This approach not only allows for comparisons between similar objects but also provides the ability to track structural changes of the same object over time. Overall, the proposed methodology provides a significant improvement over current photogrammetry techniques by reducing the time and effort required to create 3D models while maintaining a high level of accuracy and repeatability.
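    As a rough illustration of the SIFT key-point step mentioned in the abstract, the sketch below detects and matches features between two consecutive acquisitions using OpenCV; the constrained SfM reconstruction that consumes the known robot poses is not shown, and the image file names are placeholders, not the authors' data.

    # Sketch of SIFT key-point detection and matching between consecutive views,
    # using OpenCV. The constrained SfM stage is not reproduced here.
    import cv2

    img1 = cv2.imread("view_000.png", cv2.IMREAD_GRAYSCALE)  # placeholder paths
    img2 = cv2.imread("view_001.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Match descriptors and keep unambiguous matches via Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    print(f"{len(good)} reliable correspondences between consecutive views")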

    Energy Efficient Semi-Partitioned Scheduling for Embedded Multiprocessor Streaming Systems

    Computer Systems, Imagery and Media