
    Multi-scale active shape description in medical imaging

    Shape description in medical imaging has become an increasingly important research field in recent years. Fast, high-resolution image acquisition methods such as Magnetic Resonance (MR) imaging produce very detailed cross-sectional images of the human body; shape description is then a post-processing operation which abstracts quantitative descriptions of anatomically relevant object shapes. This task is usually performed by clinicians and other experts, who first segment the shapes of interest and then make volumetric and other quantitative measurements. The high demand on expert time, together with inter- and intra-observer variability, creates a clinical need to automate this process. Furthermore, recent studies in clinical neurology on the correspondence between disease status and degree of shape deformation necessitate the use of more sophisticated, higher-level shape description techniques. In this work a new hierarchical tool for shape description has been developed, combining two recently developed and powerful techniques in image processing: differential invariants in scale-space, and active contour models. This tool enables quantitative and qualitative shape studies at multiple levels of image detail, exploiting the extra degree of freedom offered by image scale. Using scale-space continuity, the global object shape can be detected at a coarse level of image detail, and finer shape characteristics can be found at higher levels of detail, or scales. New methods for active shape evolution and focusing have been developed for the extraction of shapes at a large set of scales, using an active contour model whose energy function is regularized with respect to scale and geometric differential image invariants. The resulting set of shapes is formulated as a multi-scale shape stack which is analysed and described at each scale level with a large set of shape descriptors, in order to obtain and analyse shape changes across scales. This shape stack leads naturally to several questions regarding variable sampling and the appropriate levels of detail at which to investigate an image; the relationship between active contour sampling precision and scale-space is therefore addressed. After a thorough review of modern shape description, multi-scale image processing and active contour model techniques, the novel framework for multi-scale active shape description is presented and tested on synthetic and medical images. An interesting result is the recovery of the fractal dimension of a known fractal boundary using this framework. The medical applications addressed are grey-matter deformations in patients with epilepsy, spinal cord atrophy in patients with Multiple Sclerosis, and cortical impairment in neonates. Extensions to non-linear scale-spaces, comparisons to binary curve and curvature evolution schemes, and other hierarchical shape descriptors are discussed.
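
    To make the coarse-to-fine idea concrete, the following is a minimal Python sketch of active shape focusing across a Gaussian scale-space, built from off-the-shelf routines (scipy, scikit-image) rather than the thesis's own scale-regularized energy; the parameter values and the circular initialisation are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage.segmentation import active_contour

    def multiscale_snake(image, scales=(8.0, 4.0, 2.0, 1.0), n_points=200):
        """Track one object contour from coarse to fine scale; the snake
        converged at each coarse scale initialises the next, finer one."""
        rows, cols = image.shape
        theta = np.linspace(0, 2 * np.pi, n_points)
        # Illustrative initialisation: a large circle around the image centre.
        snake = np.column_stack([rows / 2 + 0.4 * rows * np.sin(theta),
                                 cols / 2 + 0.4 * cols * np.cos(theta)])
        stack = []  # multi-scale shape stack: one converged contour per scale
        for sigma in scales:  # coarse (large sigma) to fine (small sigma)
            smoothed = gaussian_filter(image.astype(float), sigma)
            snake = active_contour(smoothed, snake,
                                   alpha=0.015, beta=10.0, gamma=0.001)
            stack.append((sigma, snake.copy()))
        return stack

    The returned stack can then be fed to per-scale shape descriptors to study how the contour changes across scales.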

    Function Classes for Identifiable Nonlinear Independent Component Analysis

    Unsupervised learning of latent variable models (LVMs) is widely used to represent data in machine learning. When such models reflect the ground-truth factors and the mechanisms mapping them to observations, there is reason to expect that they allow generalization in downstream tasks. It is, however, well known that such identifiability guarantees are typically not achievable without putting constraints on the model class. This is notably the case for nonlinear Independent Component Analysis, in which the LVM maps statistically independent variables to observations via a deterministic nonlinear function. Several families of spurious solutions that fit the data perfectly, but do not correspond to the ground-truth factors, can be constructed in generic settings. However, recent work suggests that constraining the function class of such models may promote identifiability. Specifically, function classes with constraints on their partial derivatives, gathered in the Jacobian matrix, have been proposed, such as orthogonal coordinate transformations (OCTs), which impose orthogonality of the Jacobian columns. In the present work, we prove that a subclass of these transformations, conformal maps, is identifiable, and we provide novel theoretical results suggesting that OCTs have properties that prevent families of spurious solutions from spoiling identifiability in a generic setting.
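
    The OCT constraint is easy to check numerically. The following is a minimal sketch that verifies orthogonality of the Jacobian columns for one conformal map, the real form of z -> z^2; the map and the test point are illustrative choices, not taken from the paper.

    import numpy as np

    def f(x):
        """R^2 -> R^2 conformal map: (x1, x2) -> (x1^2 - x2^2, 2*x1*x2)."""
        return np.array([x[0] ** 2 - x[1] ** 2, 2 * x[0] * x[1]])

    def jacobian(f, x, eps=1e-6):
        """Central finite-difference Jacobian; column j is df/dx_j."""
        n = len(x)
        J = np.zeros((len(f(x)), n))
        for j in range(n):
            e = np.zeros(n)
            e[j] = eps
            J[:, j] = (f(x + e) - f(x - e)) / (2 * eps)
        return J

    x = np.array([0.7, -1.3])
    J = jacobian(f, x)
    G = J.T @ J                                   # Gram matrix of the columns
    print("J^T J =\n", G)
    # OCT property: the Jacobian columns are orthogonal, so G is diagonal.
    assert np.allclose(G - np.diag(np.diag(G)), 0.0, atol=1e-5)
    # Conformality is stronger still: the diagonal entries coincide.
    assert np.isclose(G[0, 0], G[1, 1], atol=1e-5)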

    Proceedings of the Workshop on Change of Representation and Problem Reformulation

    The proceedings of the third Workshop on Change of Representation and Problem Reformulation are presented. In contrast to the first two workshops, this workshop focused on analytic or knowledge-based approaches, as opposed to the statistical or empirical approaches called 'constructive induction'. The organizing committee believes that there is potential for combining analytic and inductive approaches at a future date. However, it became apparent at the previous two workshops that the communities pursuing these different approaches are currently interested in largely non-overlapping issues. The constructive induction community has been holding its own workshops, principally in conjunction with the machine learning conference. While this workshop is more focused on analytic approaches, the organizing committee has made an effort to include more application domains, greatly expanding from the workshop's origins in the machine learning community. Participants in this workshop come from the full spectrum of AI application domains, including planning, qualitative physics, software engineering, knowledge representation, and machine learning.

    Essays on the nonlinear and nonstochastic nature of stock market data

    The nature and structure of stock-market price dynamics is an area of ongoing and rigorous scientific debate. For almost three decades, most emphasis has been given to upholding the concepts of Market Efficiency and rational investment behaviour. Such an approach has favoured the development of numerous linear and nonlinear models, mainly of stochastic foundations. Advances in mathematics have shown that nonlinear deterministic processes, i.e. "chaos", can produce sequences that appear random to linear statistical techniques. Until recently, investment finance has been a science based on linearity and stochasticity. Hence it is important that studies of Market Efficiency include investigations of chaotic determinism and power laws. As far as chaos is concerned, the research results are rather mixed or inconclusive, and prone to controversy. This inconclusiveness is attributed to two things: the nature of stock-market time series, which are highly volatile and contaminated with a substantial amount of noise of largely unknown structure, and the lack of appropriate robust statistical testing procedures. In order to overcome such difficulties, this thesis shows empirically, and for the first time, how one can combine novel techniques from the recent chaotic and signal analysis literature within a univariate time series analysis framework. Three basic methodologies are investigated: recurrence analysis, surrogate data, and wavelet transforms. Recurrence analysis is used to reveal qualitative and quantitative evidence of nonlinearity and nonstochasticity for a number of stock markets. It is then demonstrated how surrogate data can be simulated, under a statistical hypothesis testing framework, to provide similar evidence. Finally, it is shown how wavelet transforms can be applied to reveal various salient features of the market data and to provide a platform for nonparametric regression and denoising. The results indicate that, without invoking any parametric model-based assumptions, one can readily deduce that there is more to the data than linearity and stochastic randomness. Moreover, substantial evidence of recurrent patterns and aperiodicities is discovered, which can be attributed to chaotic dynamics. These results are therefore highly consistent with existing research indicating some types of nonlinear dependence in financial data. Concluding, the value of this thesis lies in its contribution to the overall evidence on Market Efficiency and chaotic determinism in financial markets. The main implication is that the theory of equilibrium pricing in financial markets may need reconsideration in order to accommodate the structures revealed.
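
    As a concrete illustration of the recurrence-analysis step, the following is a minimal Python sketch of a thresholded recurrence matrix built from a time-delay embedding; the embedding dimension, delay, threshold, and toy series are illustrative assumptions, not the thesis's settings (in practice these are chosen by standard heuristics such as false nearest neighbours and mutual information).

    import numpy as np

    def recurrence_matrix(series, dim=3, delay=1, eps=0.1):
        """Binary recurrence matrix R[i, j] = 1 iff the embedded states
        i and j lie within distance eps of each other."""
        n = len(series) - (dim - 1) * delay
        # Takens-style delay embedding: each row is one reconstructed state.
        states = np.column_stack([series[i * delay: i * delay + n]
                                  for i in range(dim)])
        dists = np.linalg.norm(states[:, None, :] - states[None, :, :],
                               axis=-1)
        return (dists <= eps).astype(int)

    # Usage on a toy noisy periodic series (illustrative only):
    rng = np.random.default_rng(0)
    x = np.sin(0.2 * np.arange(600)) + 0.1 * rng.normal(size=600)
    R = recurrence_matrix(x, dim=3, delay=2, eps=0.3)
    print("recurrence rate:", R.mean())   # density of recurrent points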

    Towards Video Transformers for Automatic Human Analysis

    With the aim of creating artificial systems capable of mirroring the nuanced understanding and interpretative powers inherent to human cognition, this thesis embarks on an exploration of the intersection between human analysis and Video Transformers. The objective is to harness the potential of Transformers, a promising architectural paradigm, to comprehend the intricacies of human interaction, thus paving the way for the development of empathetic and context-aware intelligent systems. To do so, we explore the whole Computer Vision pipeline, from data gathering through model design and experimentation to a deep analysis of recent developments. Central to this study is the creation of UDIVA, an expansive multi-modal, multi-view dataset capturing dyadic face-to-face human interactions. Comprising 147 participants across 188 sessions, UDIVA integrates audio-visual recordings, heart-rate measurements, personality assessments, socio-demographic metadata, and conversational transcripts, establishing itself as the largest dataset for dyadic human interaction analysis to date. This dataset provides a rich context for probing the capabilities of Transformers within complex environments. In order to validate its utility, as well as to elucidate Transformers' ability to assimilate diverse contextual cues, we focus on the challenge of personality regression within interaction scenarios. We first adapt an existing Video Transformer to handle multiple contextual sources and conduct rigorous experimentation. We empirically observe a progressive enhancement in model performance as more context is added, reinforcing the potential of Transformers to decode intricate human dynamics. Building upon these findings, the Dyadformer emerges as a novel architecture adept at long-range modeling of dyadic interactions. By jointly modeling both participants in the interaction, and by embedding multi-modal integration into the model itself, the Dyadformer surpasses the baseline and other concurrent approaches, underscoring Transformers' aptitude in deciphering multifaceted, noisy, and challenging tasks such as the analysis of human personality in interaction. Nonetheless, these experiments unveil the ubiquitous challenges of training Transformers, particularly in managing overfitting due to their demand for extensive datasets. Consequently, we conclude this thesis with a comprehensive investigation into Video Transformers, analyzing topics ranging from architectural designs and training strategies to input embedding and tokenization, traversing multi-modality and specific applications. Across these, we highlight trends that optimally harness spatio-temporal representations while handling video redundancy and high dimensionality. A culminating performance comparison is conducted in the realm of video action classification, spotlighting strategies that exhibit superior efficacy, even compared to traditional CNN-based methods.
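
    To fix ideas, the following is a minimal sketch of the kind of multi-modal token fusion and personality regression described above, written with standard PyTorch components; it is an illustrative baseline, not the Dyadformer or the thesis's actual architecture, and all dimensions and the trait count are assumptions.

    import torch
    import torch.nn as nn

    class MultimodalRegressor(nn.Module):
        def __init__(self, dim=256, n_heads=4, n_layers=2, n_traits=5):
            super().__init__()
            self.modality = nn.Embedding(2, dim)     # 0 = video, 1 = audio
            layer = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, n_layers)
            self.head = nn.Linear(dim, n_traits)

        def forward(self, video_tokens, audio_tokens):
            # Tag each token with its modality, then fuse by concatenation.
            v = video_tokens + self.modality.weight[0]
            a = audio_tokens + self.modality.weight[1]
            fused = self.encoder(torch.cat([v, a], dim=1))
            return self.head(fused.mean(dim=1))      # pool, then regress

    model = MultimodalRegressor()
    scores = model(torch.randn(2, 16, 256), torch.randn(2, 8, 256))
    print(scores.shape)   # (2, 5): one trait vector per batch item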

    The Media and the Academic Globalization Debate: Theoretical Analysis and Critique

    This study offers a reconstruction and critical evaluation of globalization theory, a perspective that has been central to sociology and cultural studies in recent decades, from the viewpoint of media and communications. As the study shows, sociological and cultural globalization theorists rely heavily on arguments concerning media and communications, especially the so-called new information and communication technologies, in the construction of their frameworks. Together with deepening the understanding of globalization theory, the study offers new critical knowledge of the problematic consequences that follow from such strong investment in media and communications in contemporary theory. The book is divided into four parts. The first part presents the research problem, the approach and the theoretical contexts of the study. Following the introduction in Chapter 1, I identify the core elements of globalization theory in Chapter 2. At the heart of globalization theory is the claim that recent decades have witnessed massive changes in the spatio-temporal constitution of society, caused by new media and communications in particular, and that these changes necessitate the rethinking of the foundations of social theory as a whole. Chapter 3 introduces three paradigms of media research (the political economy of media, cultural studies, and medium theory), the discussion of which makes it easier to understand the key issues and controversies that emerge in academic globalization theorists' treatment of media and communications. The next two parts offer a close reading of four theorists whose works I use as entry points into academic debates on globalization. I argue that we can make sense of mainstream positions on globalization by dividing them into two paradigms: on the one hand, media-technological explanations of globalization and, on the other, cultural globalization theory. As examples of the former, I discuss the works of Manuel Castells (Chapter 4) and Scott Lash (Chapter 5). I maintain that their analyses of globalization processes are overly media-centric and result in an unhistorical and uncritical understanding of social power in an era of capitalist globalization. A related evaluation of the second paradigm, cultural globalization theory, as exemplified by Arjun Appadurai and John Tomlinson, is presented in Chapter 6. I argue that, owing to their rejection of the importance of nation states and the notion of cultural imperialism for cultural analysis, and their replacement with a framework of media-generated deterritorializations and flows, these theorists underplay the importance of the neoliberalization of cultures throughout the world. The fourth part (Chapter 7) presents a central research finding of this study, namely that the media-centrism of globalization theory can be understood in the context of the emergence of neoliberalism. I find it problematic that, at the very time when capitalist dynamics have been strengthened in social and cultural life, advocates of globalization theory have directed attention to media-technological changes and their sweeping socio-cultural consequences, instead of analyzing the powerful material forces that shape society and culture. I further argue that this shift serves not only analytical but also utopian functions, that is, the longing for a better world in times when such longing is otherwise considered impracticable.

    Towards faster numerical solution of Continuous Time Markov Chains stored by symbolic data structures

    This work considers different aspects of model-based performance and dependability analysis, a research area that analyses systems (e.g. computer, telecommunication or production systems) in order to quantify their performance and reliability. Such an analysis can be carried out already in the planning phase, without a physically existing system. All aspects treated in this work are based on finite state spaces (i.e. the models have only finitely many states) and on a representation of the state graphs by Multi-Terminal Binary Decision Diagrams (MTBDDs). Currently, there are many tools that transform high-level model specifications (e.g. process algebras or Petri nets) into low-level models (e.g. Markov chains). Markov chains can be represented by sparse matrices. For complex models, very large state spaces may occur (a phenomenon called state-space explosion in the literature) and, accordingly, very large matrices representing the state graphs. The problem of building the model from the specification and storing the state graph can be regarded as solved: there are heuristics for compactly storing the state graph in MTBDD or Kronecker data structures, and there are efficient algorithms for model generation and functional analysis. For the quantitative analysis, problems remain due to the size of the underlying state space. This work provides methods to alleviate these problems in the case of MTBDD-based storage of the state graph. The contribution is threefold: 1. for the generation of smaller state graphs in the model generation phase (which are usually easier to solve), a symbolic elimination algorithm is developed; 2. for the calculation of steady-state probabilities of Markov chains, a multilevel algorithm is developed which allows for faster solutions; 3. for calculating the most probable paths in a state graph, the mean time to the first failure of a system, and related measures, a path-based solver is developed.
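
    For concreteness, the following is a minimal dense sketch of the quantity the multilevel solver computes: the steady-state distribution pi solving pi Q = 0 with sum(pi) = 1 for a CTMC generator Q. A symbolic MTBDD-based solver never builds Q explicitly; the small generator below is an illustrative example only.

    import numpy as np

    def steady_state(Q):
        """Stationary distribution of a CTMC with generator matrix Q
        (off-diagonal rates >= 0, rows summing to zero)."""
        n = Q.shape[0]
        # Replace one balance equation with the normalisation constraint.
        A = np.vstack([Q.T[:-1], np.ones(n)])
        b = np.zeros(n)
        b[-1] = 1.0
        return np.linalg.solve(A, b)

    # A 3-state birth-death chain (rates are illustrative):
    Q = np.array([[-2.0,  2.0,  0.0],
                  [ 1.0, -3.0,  2.0],
                  [ 0.0,  1.0, -1.0]])
    pi = steady_state(Q)
    print(pi, pi @ Q)   # pi @ Q is ~0 at steady state; pi = (1/7, 2/7, 4/7)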

    A history of the concept of parameter in Generative Grammar

    This thesis traces the history of the concept of parameter in Generative Grammar, from the first steps of the Principles and Parameters model in the late 1970s to the advent of the Minimalist Program (MP), examining how this notion has been implemented both during and after this transition. The analysis carried out in this dissertation spans from the systematization of the so-called "standard theory" of Generative Grammar in Aspects of the Theory of Syntax (1965) to the latest developments of the MP. Chapter I offers an overview of the protohistory of the concept of parameter, focusing on the factors, both theoretical and empirical, at the basis of the systematic formulation of this notion in Chomsky (1981). The theoretical factors are identified with the distinction between descriptive and explanatory adequacy and with Chomsky's proposed solution to the so-called problem of the poverty of the stimulus. The empirical factor consists in the outcome of Rizzi's and Taraldsen's pre-parametric inquiries, which shed new light on the systematicity of linguistic variation. In Chapter II, I examine the individual formulations of the main parameters that were proposed in Generative Grammar within the Government-Binding (GB) Theory of the Eighties. While the parameters at issue are taken from the list proposed in Rizzi (2014), in the first part of the chapter they are retrospectively classified according to the specific syntactic property they would refer to in current minimalist theories. Chapter III focuses on the debate about the concept of parameter which took place during the first decade of the 21st century. The first two positions discussed are Kayne's (2000, 2005) microparametric approach, which draws on the idea that parametric variation is located in the lexicon, and Baker's (2001, 2008) macroparametric approach, which instead relies on the classical idea that parameters are expressed on principles. These two approaches are then confronted with Newmeyer's (2004, 2005) criticism, which points out their descriptive and theoretical flaws. The chapter ends with the presentation of the parametric model proposed by Roberts & Holmberg (2010), which overcomes the limitations of micro- and macro-parameters by combining a lexically based, microparametric view of linguistic variation with the idea that parametric variation is an emergent property of the interaction of UG, primary linguistic data, and third-factor (non-language-specific) considerations. Chapters IV and V evaluate the classical parameters of the GB Theory which still play a role in current generative theory. Chapter IV reviews the null subject parameter, the V-to-T movement parameter, the polysynthesis parameter, and the overt vs covert wh-movement parameter, while Chapter V is devoted to the history of the head-complement parameter. While on the one hand null subjects, V-to-T movement, and polysynthesis can be reconciled with Roberts & Holmberg's theory, which is based on the assumption that the locus of parameters is the functional lexicon, on the other hand it is argued that wh-movement and head-directionality pertain to the articulatory-perceptual (A-P) interface, as envisioned by Berwick & Chomsky (2011). The picture emerging from this analysis highlights that the nature of parametric variation is twofold: syntactic and post-syntactic. This has an interesting consequence for the duality between head movement and phrasal movement: only heads are observed to move in narrow syntax, with XPs being linearized post-syntactically.

    Multi-camera object segmentation in dynamically textured scenes using disparity contours

    This thesis presents a stereo-based object segmentation system that combines the simplicity and efficiency of the background subtraction approach with the capacity to deal with dynamic lighting, dynamic background texture, and large textureless regions. The method proposed here does not rely on full stereo reconstruction or empirical parameter tuning; instead, it employs disparity-based hypothesis verification to separate multiple objects at different depths. The proposed system uses a pair of calibrated cameras with a small baseline and factors the segmentation problem into two stages: a well-understood offline stage and a novel online one. Based on the calibrated parameters, the offline stage models the 3D geometry of the background by constructing a complete disparity map. The online stage compares corresponding new frames, synchronously captured by the two cameras, against the background disparity map in order to falsify the hypothesis that the scene contains only background. The resulting object boundary contours possess a number of useful features that can be exploited for object segmentation. Three different approaches to contour extraction and object segmentation were experimented with, and their advantages and limitations analyzed. The system demonstrates its ability to extract multiple objects from a complex scene with near real-time performance. The algorithm also has the potential to provide precise object boundaries rather than just bounding boxes, and is extensible to 2D and 3D object tracking and online background update.
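
    The online hypothesis-verification step can be sketched as follows: each left-image pixel is warped by the background disparity and compared photometrically against the right image, and a large mismatch falsifies the background-only hypothesis at that pixel. Rectified images, purely horizontal disparity, and the threshold value are simplifying assumptions of this sketch, not details from the thesis.

    import numpy as np

    def foreground_mask(left, right, bg_disparity, tau=25.0):
        """Binary mask of pixels inconsistent with the background geometry.

        left, right  : grayscale frames (H, W), float
        bg_disparity : per-pixel background disparity map (H, W), int
        tau          : photometric mismatch threshold (illustrative value)
        """
        h, w = left.shape
        cols = np.arange(w)[None, :].repeat(h, axis=0)
        # Predicted right-image column of each left pixel under the
        # background-only hypothesis.
        warped_cols = np.clip(cols - bg_disparity, 0, w - 1)
        rows = np.arange(h)[:, None].repeat(w, axis=1)
        predicted = right[rows, warped_cols]
        return np.abs(left - predicted) > tau

    Contours of this mask then serve as the disparity contours from which object boundaries are extracted.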