880 research outputs found

    The Evolution of the Retail Landscape

    Get PDF
    If the city is a theatre of social interaction (Mumford 1996), then one of the principal stage sets is the retail landscape. Retail districts are generally where people congregate, making places of shopping among the liveliest areas of the city. In addition to being social settings, retail areas are also where a large component of the city’s economy is transacted, and they are implicated in the political dramas of the city, particularly those dealing with issues of growth and development. Retail shops are highly visible elements of the urban landscape, lining principal arteries and clustering at major transit nodes. Retailing is woven throughout the economic, social, political, and built fabrics of the city. The evolution of the retail landscape was studied throughout the development of London, Ontario, a typical mid-sized North American city. The functional and spatial composition of the retail sector was documented from the first settlement, through the era of rapid industrialization, to today’s consumption-based city. Over time, the retail landscape exhibited much dynamism, reflecting changing socio-economic conditions as well as technological innovation. Both the retailers themselves and the environments in which their businesses were conducted have evolved, from the primitive general store, through the grand emporia lining ‘mainstreet’, to the contemporary planned shopping centres. Comparisons were made between the physical characteristics of the built environments constructed in various eras which make up the retail landscape. Drawing from the urban morphology literature (notably Conzen 1960), analysis was conducted of the town-plan, building forms, and land-uses of the various retail environments. In addition to documenting the general changes in these town-scape elements over time, further analysis was conducted on the form and function of the archetypical retail environments: the traditional ‘mainstreet’ district and contemporary shopping centres. Although they differ in many ways, a common logic was found in all retail landscapes, united through the drive for profit maximization by the retailers who shape their environments in striving towards this goal. Theoretical advancements to the field of urban morphology are presented, arguing that it is necessary to consider all elements of the town-scape in unison when describing the character of urban environments. A trialectic is proposed, taking into account how each of these elements simultaneously shapes and is shaped by the other two.

    Personalizing the web: A tool for empowering end-users to customize the web through browser-side modification

    Get PDF
    167 p. Web applications delegate the final rendering of their pages to the browser. This permits browser-based transcoding (a.k.a. Web Augmentation) that can ultimately be singularized for each browser installation. This creates an opportunity for Web consumers to customize their Web experiences. This vision requires provisioning adequate tooling that makes Web Augmentation affordable to laymen. We consider this a special class of End-User Development, integrating Web Augmentation paradigms. The dominant paradigm in End-User Development is scripting through visual languages. This thesis advocates for a Google Chrome browser extension for Web Augmentation. This is carried out through WebMakeup, a visual DSL programming tool for end-users to customize their own websites. WebMakeup removes, moves and adds web nodes from different web pages in order to avoid tab switching, scrolling, excessive clicking, and cutting and pasting. Moreover, Web Augmentation extensions have difficulties in finding web elements after a website update. As a consequence, browser extensions stop working and users might abandon them. This is why two different locators have been implemented with the aim of improving web locator robustness.
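    As a rough illustration of the dual-locator idea (a sketch only, not the WebMakeup implementation), a robust lookup might try a stored structural locator first and fall back to a content-based one when a site update breaks the original selector. The snippet below uses Selenium for Python purely for demonstration; the selector and anchor text are hypothetical.

```python
# Illustrative fallback-locator sketch (not the WebMakeup code); assumes Selenium is installed.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

def locate(driver, css_selector, anchor_text):
    """Try a structural locator first, then fall back to a content-based locator."""
    try:
        return driver.find_element(By.CSS_SELECTOR, css_selector)
    except NoSuchElementException:
        # The page structure changed after a site update: match on visible text instead.
        xpath = f"//*[contains(normalize-space(text()), '{anchor_text}')]"
        return driver.find_element(By.XPATH, xpath)

# Hypothetical usage: the selector and anchor text are made-up examples.
driver = webdriver.Chrome()
driver.get("https://example.org")
widget = locate(driver, "#sidebar .weather-widget", "Weather")
```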

    Feedback-Driven Data Clustering

    Get PDF
    The acquisition of data and its analysis has become a common yet critical task in many areas of modern economy and research. Unfortunately, the ever-increasing scale of datasets has long outgrown the capacities and abilities humans can muster to extract information from them and gain new knowledge. For this reason, research areas like data mining and knowledge discovery steadily gain importance. The algorithms they provide for the extraction of knowledge are mandatory prerequisites that enable people to analyze large amounts of information. Among the approaches offered by these areas, clustering is one of the most fundamental. By finding groups of similar objects inside the data, it aims to identify meaningful structures that constitute new knowledge. Clustering results are also often used as input for other analysis techniques like classification or forecasting. As clustering extracts new and unknown knowledge, it obviously has no access to any form of ground truth. For this reason, clustering results have a hypothetical character and must be interpreted with respect to the application domain. This makes clustering very challenging and leads to an extensive and diverse landscape of available algorithms. Most of these are expert tools that are tailored to a single narrowly defined application scenario. Over the years, this specialization has become a major trend that arose to counter the inherent uncertainty of clustering by including as much domain specifics as possible into algorithms. While customized methods often improve result quality, they become more and more complicated to handle and lose versatility. This creates a dilemma especially for amateur users whose numbers are increasing as clustering is applied in more and more domains. While an abundance of tools is offered, guidance is severely lacking and users are left alone with critical tasks like algorithm selection, parameter configuration and the interpretation and adjustment of results. This thesis aims to solve this dilemma by structuring and integrating the necessary steps of clustering into a guided and feedback-driven process. In doing so, users are provided with a default modus operandi for the application of clustering. Two main components constitute the core of said process: the algorithm management and the visual-interactive interface. Algorithm management handles all aspects of actual clustering creation and the involved methods. It employs a modular approach for algorithm description that allows users to understand, design, and compare clustering techniques with the help of building blocks. In addition, algorithm management offers facilities for the integration of multiple clusterings of the same dataset into an improved solution. New approaches based on ensemble clustering not only allow the utilization of different clustering techniques, but also ease their application by acting as an abstraction layer that unifies individual parameters. Finally, this component provides a multi-level interface that structures all available control options and provides the docking points for user interaction. The visual-interactive interface supports users during result interpretation and adjustment. For this, the defining characteristics of a clustering are communicated via a hybrid visualization. 
In contrast to traditional data-driven visualizations that tend to become overloaded and unusable with increasing volume and dimensionality of data, this novel approach communicates the abstract aspects of cluster composition and the relations between clusters. This aspect orientation allows the use of easy-to-understand visual components and makes the visualization immune to scale-related effects of the underlying data. This visual communication is attuned to a compact and universally valid set of high-level feedback that allows the modification of clustering results. Instead of technical parameters that indirectly cause changes in the whole clustering by influencing its creation process, users can employ simple commands like merge or split to directly adjust clusters. The orchestrated cooperation of these two main components creates a modus operandi in which clusterings are no longer created and disposed of as a whole until a satisfying result is obtained. Instead, users apply the feedback-driven process to iteratively refine an initial solution. Performance and usability of the proposed approach were evaluated with a user study. Its results show that the feedback-driven process enabled amateur users to easily create satisfying clustering results even from different and suboptimal starting situations.
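    As a minimal sketch of how such high-level feedback could act directly on an existing clustering (illustrative only, not the algorithm management component described above), merge can relabel the members of one cluster and split can re-partition a single cluster using the underlying data; all function and variable names here are hypothetical.

```python
# Minimal sketch of direct merge/split feedback on a clustering result (illustrative only).
import numpy as np
from sklearn.cluster import KMeans

def merge(labels, a, b):
    """Merge cluster b into cluster a by relabeling its members."""
    labels = labels.copy()
    labels[labels == b] = a
    return labels

def split(labels, X, c):
    """Split cluster c into two sub-clusters based on the underlying data."""
    labels = labels.copy()
    mask = labels == c
    sub = KMeans(n_clusters=2, n_init=10).fit_predict(X[mask])
    new_id = labels.max() + 1
    labels[np.flatnonzero(mask)[sub == 1]] = new_id
    return labels

# Usage: iteratively refine an initial solution instead of re-clustering from scratch.
X = np.random.rand(200, 5)
labels = KMeans(n_clusters=4, n_init=10).fit_predict(X)
labels = merge(labels, 0, 1)
labels = split(labels, X, 2)
```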

    Geological evaluation of posidonia and wealden organic-rich shales : geochemical and diagenetic controls on pore system evolution

    Get PDF
    PhD Thesis. Free gas in shales occurs mainly in larger mesopores (width >6 nm) and macropores (width >50 nm) and is likely to be the first or even main contributor to gas production. Because evaluation of the storage capacity and the final recovery of gas depends on the distribution and connectivity of these pores, their correct quantification has become a focus of advanced research. A major step towards understanding pore systems in organic-rich shales was made by the recognition that, under increasing thermal stress, decomposition of kerogen should progressively lead to the development of organic porosity. Despite this, many questions concerning the fate of organic porosity in organic-rich rocks remain unresolved. To date, several important attempts to link the evolution of organic pores with maturation and organic matter content have given inconclusive and contradictory results. In this study, the pore systems of the Lower Jurassic Posidonia and Lower Cretaceous Wealden shales, representing different mudrock types and covering a range of maturities, have been characterised. By integrating geochemical and petrophysical measurements with a detailed analysis of microscopic images, we offer a unique approach for measuring porosity and pore characteristics on micrometre and centimetre scales with a thorough understanding of micrometre-scale lithological variation. Key aims were to quantify the evolution of porosity associated with both organic matter and the inorganic rock matrix as a function of maturity, and to address the influence of mudrock heterogeneity on porosity change. Our experiments revealed a non-linear trend of porosity change with maturity in pores of all sizes, with an initial drop in the oil window as a result of mechanical compaction, chemical diagenesis, and pore-filling oil and bitumen. At comparable maturities, porosity and the distribution of pores depend on the content of clays, organic matter, microfossils, silt grains and pore-filling cement. In both the Posidonia and the Wealden, macropores (>50 nm) account for merely up to 20% of the total porosity physically measured, with the lowest percentage in the least mature samples. It was also demonstrated that gas-sorption micropores are controlled by the amount of organic matter and clay minerals, confirming the microporous nature of both. In terms of organic porosity development, we provide evidence that organic matter content and the path of its thermal decomposition control the total porosities of gas-window shales. Importantly, neoformed intraorganic porosity is highly heterogeneous, with 35% of organic particles containing visible pores (>6 nm in diameter) and porosities of individual particles ranging from 0–50%. As a key result, we confirmed that porous zones in the gas window are associated with sites of bitumen retention and degradation. This indicates that the location of potential reservoirs of free gas should be linked to rigid zones, such as fossiliferous faecal pellets or compaction shadows of mineral grains. Combined mercury injection and SEM data also showed that visible but potentially isolated macropores are connected, but only through throats below 20 nm. With the evolution of the porous network of bitumen saturating the shale matrix in the gas window, the connectivity of the system changes from inorganic- to organic-dominated. The size of the pore throats and the connectivity of the organic system in shales are likely key controls on the delivery of gas from pore to fracture and then to wellbore. (Gas Shales in Europe project)

    Computational studies of vascularized tumors

    Get PDF
    Cancer is a hard problem touching numerous branches of life science. One reason for the complexity of cancer is that tumors act across many different time and length scales, ranging from the subcellular to the macroscopic level. Modern science still lacks an integral understanding of cancer; in recent years, however, increasing computational power has enabled computational models to accompany and support conventional medical and biological methods, bridging the scales from micro to macro. Here I report a multiscale computational model simulating the progression of solid tumors, including the vasculature mimicked by artificial arterio-venous blood vessel networks. I present a numerical optimization procedure to determine the radii of blood vessels in an artificial microcirculation based on physiological stimuli, independently of Murray’s law. Because it comprises the blood vessels, the reported model enables the inspection of blood vessel remodeling dynamics (angiogenesis, vasodilation, vessel regression and collapse) during tumor growth. We successfully applied the method to simulated tumor blood vessel networks guided by optical mammography data. In subsequent model development, I included cellular details into the method, enabling a computational study of the tumor microenvironment at cellular resolution. I found that small vascularized tumors at the angiogenic switch exhibit a large ecological niche diversity, resulting in high evolutionary pressure favoring the clonal selection hypothesis.
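    For context only (background, not part of the thesis's procedure): Murray’s law, which the radius optimization above is stated to be independent of, relates the radius of a parent vessel to the radii of its daughter branches at a bifurcation through a cube relationship:

```latex
% Murray's law at a bifurcation: parent radius r_p, daughter radii r_1 and r_2
r_p^{3} = r_1^{3} + r_2^{3}
```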

    Festivals as a human adaptation of public space

    Get PDF
    Thesis (M.C.P.)--Massachusetts Institute of Technology, Dept. of Urban Studies and Planning, 2009. Author also earned an Urban Design Certificate from the Program in Urban Design, a joint graduate program with the Dept. of Architecture and the Dept. of Urban Studies and Planning. Vita. Includes bibliographical references (p. 174-179). As currently conceived, the contemporary city will not advance beyond its present level of achievement. This research frames the city within three root values upon which all decisions made in the city are based. The three root values are continuity, connection and openness. Under the present priorities of city making, the contemporary city is heavily biased toward continuity. A paradigm shift is required in the way cities are conceived and developed to rebalance the three root values, with the intention of creating cities that are better places for humans to inhabit. This shift is a call for a more human city. This research investigates a collection of urban design principles that are intended to humanize cities and improve them as settings for human use and occupation. The research utilizes the festival as a temporal moment in the city of uniquely human-centered use. It is a moment in which the human becomes the dominant priority in the organization and occupation of space, while other systems of the city are temporarily interrupted. Through a series of six festival case studies, a number of consistent adaptations of space emerge in which the festive events highlight strategies for humanizing space in the city. The urban design principles highlighted by this research include adapting spatial containment, restructuring movement, exposing meaning and commonality, attracting density of people, removing separation of uses, increasing overlapping activities, and spatially and temporally scripting and choreographing all of these strategies. These principles are then examined through a design test that shows their applicability in making humanizing adaptations of space and ultimately creating more human cities. by Joshua Charles Fiala. M.C.P.

    DISJUNCTURE AMONG CLASSIC PERIOD CULTURAL LANDSCAPES IN THE TUXTLA MOUNTAINS, SOUTHERN VERACRUZ, MEXICO

    Get PDF
    Teotihuacan was the most influential city in the Classic Mesoamerican world-system. Like other influential cities in the ancient world, however, Teotihuacan did not homogeneously affect the various cultural landscapes that thrived in Mesoamerica during the Classic period (300-900 CE). Even where strong central Mexican influences appear outside the Basin of Mexico, the nature, extent, and strength of these influences are discontinuous over time and space. Every place within the Classic Mesoamerican landscape has a unique Teotihuacan story. In the Tuxtla Mountains of southern Veracruz, Mexico, Matacapan, located in the Catemaco Valley, drew heavily upon ideas and symbols fostered at Teotihuacan, while Totocapan, a peer political capital located in the neighboring Tepango Valley, emphasized social institutions well entrenched within Gulf Coast cultural traditions. Through a detailed comparison of these two river valleys, I demonstrate that each polity developed along a different trajectory. By the Middle Classic (450-650 CE), each polity displayed different political, economic, and ritual institutions. While they shared an underlying material culture style, the data suggest that the regimes of the two polities promoted different ideologies. These cultural divergences did not, however, cause hostilities between them. To the contrary, compositional sourcing of Coarse Orange jars indicates that they engaged in material exchanges with each other. Agents at each settlement within the study region made unique decisions with regard to their involvement in local, regional, and macroregional interaction networks, particularly with regard to the adoption or rejection of Teotihuacan cultural elements. As a result, the Classic period Tuxtlas comprised multiple overlapping, but disjoint, landscapes of interaction. Places of human settlement were nodes on the landscape where these disjoint landscapes intersected in space and time. By examining these disjunctures, world-system studies can reveal a trend of increasing cultural diversity that parallels the better-theorized trend of homogenization emphasized by core-periphery models. In this dissertation, I take the initial steps toward developing an archaeology of disjuncture that examines the cultural variability that develops where groups across the landscape employ different strategies of interaction within the world-system.

    Advanced energy management strategies for HVAC systems in smart buildings

    Get PDF
    The efficacy of energy management systems at dealing with energy consumption in buildings has been a topic of growing interest in recent years, due to the ever-increasing global energy demand and the large percentage of energy currently used by buildings. The scale of this sector has attracted research effort with the objective of uncovering potential improvement avenues and materializing them with the help of recent technological advances that could be exploited to lower the energetic footprint of buildings. Specifically, in the area of heating, ventilating and air conditioning installations, the availability of large amounts of historical data in building management software suites makes possible the study of how resource-efficient these systems really are when entrusted with ensuring occupant comfort. Indeed, recent reports have shown that there is a gap between the ideal operating performance and the performance achieved in practice. Accordingly, this thesis considers the research of novel energy management strategies for heating, ventilating and air conditioning installations in buildings, aimed at narrowing the performance gap by employing data-driven methods to increase their context awareness, allowing management systems to steer the operation towards higher efficiency. This includes the advancement of modeling methodologies capable of extracting actionable knowledge from historical building behavior databases, through load forecasting and equipment operational performance estimation supporting the identification of a building’s context and energetic needs, and the development of a generalizable multi-objective optimization strategy aimed at meeting these needs while minimizing the consumption of energy. The experimental results obtained from the implementation of the developed methodologies show a significant potential for increasing the energy efficiency of heating, ventilating and air conditioning systems while being sufficiently generic to support their usage in different installations having diverse equipment. In conclusion, a complete analysis and actuation framework was developed, implemented and validated by means of an experimental database acquired from a pilot plant during the research period of this thesis. The obtained results demonstrate the efficacy of the proposed standalone contributions and, as a whole, represent a suitable solution for helping to increase the performance of heating, ventilating and air conditioning installations without affecting the comfort of their occupants.
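    As a schematic illustration of the multi-objective idea (a sketch under assumed, simplified models, not the optimization strategy developed in the thesis), a weighted objective could trade predicted energy use against predicted comfort deviation when choosing a temperature setpoint; both predictive models below are hypothetical stand-ins for the data-driven forecasts mentioned above.

```python
# Illustrative weighted-sum trade-off between energy use and comfort (not the thesis's method).
import numpy as np
from scipy.optimize import minimize

def predicted_energy_kwh(setpoint_c, outdoor_c=5.0):
    # Hypothetical model: heating energy grows with the indoor/outdoor temperature gap.
    return 0.8 * max(setpoint_c - outdoor_c, 0.0)

def comfort_penalty(setpoint_c, preferred_c=21.0):
    # Hypothetical model: quadratic discomfort around the occupants' preferred temperature.
    return (setpoint_c - preferred_c) ** 2

def objective(x, w_energy=1.0, w_comfort=2.0):
    setpoint = x[0]
    return w_energy * predicted_energy_kwh(setpoint) + w_comfort * comfort_penalty(setpoint)

# Search for the setpoint balancing both terms within an allowed comfort band.
result = minimize(objective, x0=[21.0], bounds=[(18.0, 24.0)])
print("chosen setpoint:", result.x[0])
```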

    Irish Machine Vision and Image Processing Conference Proceedings 2017

    Get PDF

    FRIEND: A Cyber-Physical System for Traffic Flow Related Information Aggregation and Dissemination

    Get PDF
    The major contribution of this thesis is to lay the theoretical foundations of FRIEND: a cyber-physical system for traffic Flow-Related Information aggrEgatioN and Dissemination. By integrating resources and capabilities at the nexus between the cyber and physical worlds, FRIEND will contribute to aggregating traffic flow data collected by the huge fleet of vehicles on our roads into a comprehensive, near real-time synopsis of traffic flow conditions. We anticipate providing drivers with a meaningful, color-coded, at-a-glance view of flow conditions ahead, alerting them to congested traffic. FRIEND can be used to provide accurate information about traffic flow and to propagate this information. The workhorse of FRIEND is the ubiquitous lane delimiters (a.k.a. cat's eyes) on our roadways that, at the moment, are used simply as dumb reflectors. Our main vision is that by endowing cat's eyes with a modest power source and detection and communication capabilities, they will play an important role in collecting, aggregating and disseminating traffic flow conditions to the driving public. We envision the cat's eyes system to be supplemented by road-side units (RSUs) deployed at regular intervals (e.g. every kilometer or so). The RSUs placed on opposite sides of the roadway constitute a logical unit and are connected by optical fiber under the median. Unlike inductive loop detectors, adjacent RSUs along the roadway are not connected with each other, thus avoiding the huge cost of optical fiber. Each RSU contains a GPS device (for time synchronization), an active Radio Frequency Identification (RFID) tag for communication with passing cars, a radio transceiver for RSU-to-RSU communication and a laptop-class computing device. The physical components of FRIEND collect traffic flow-related data from passing vehicles. The collected data is used by FRIEND's inference engine to build beliefs about the state of the traffic, to detect traffic trends, and to disseminate relevant traffic flow-related information along the roadway. The second contribution of this thesis is the development of an incident detection and classification algorithm that can distinguish different types of traffic incidents and then notify the appropriate targets of the incident. We also compare our incident detection technique with other VANET techniques. Our third contribution is a novel strategy for information dissemination on highways. First, we aim to prevent secondary accidents. Second, we notify drivers far away from the accident of an expected delay, giving them the option to continue or exit before reaching the incident location. A new mechanism tracks the source of the incident while notifying drivers away from the accident. The longer the incident persists, the farther the information needs to be propagated. Furthermore, the denser the traffic, the faster it will back up: on high-density highways, an incident may form a backup of vehicles faster than on low-density highways. To account for this, we propagate information as a function of density and time.
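    To illustrate the density- and time-dependent dissemination mentioned in the last sentences (a rough sketch with assumed parameters, not the mechanism developed in the thesis), the distance over which an incident notification is propagated upstream could grow with both the elapsed incident duration and the rate at which vehicles join the queue at the current traffic density:

```python
# Rough sketch: propagation distance as a function of incident duration and traffic density.
# Parameter values and the linear queue-growth model are illustrative assumptions only.

def dissemination_distance_km(elapsed_min, density_veh_per_km,
                              arrival_speed_kmh=90.0, jam_density_veh_per_km=150.0,
                              margin_km=2.0):
    """Estimate how far upstream an incident notification should be propagated."""
    # Vehicles arriving at the back of the queue per minute (flow = density * speed).
    arrivals_per_min = density_veh_per_km * arrival_speed_kmh / 60.0
    # Queue length if arriving vehicles pack at jam density (ignores discharge past the incident).
    queue_km = arrivals_per_min * elapsed_min / jam_density_veh_per_km
    # Notify drivers beyond the estimated queue so they can still exit before reaching it.
    return queue_km + margin_km

# Denser traffic or a longer-lasting incident pushes the notification farther upstream.
print(dissemination_distance_km(elapsed_min=10, density_veh_per_km=20))
print(dissemination_distance_km(elapsed_min=10, density_veh_per_km=60))
```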