244 research outputs found

    Manipulation of uncooperative rotating objects in space with a modular self-reconfigurable robot

    Get PDF
    This thesis is a feasibility study of the controlled deployment of robotic scaffolding structures on randomly tumbling objects with low-magnitude gravitational fields, for use in space applications such as space debris removal, spacecraft maintenance, and asteroid capture and mining. The proposed solution is based on the novel use of self-reconfigurable modular robots that perform deployments on randomly tumbling objects as a task-driven reconfiguration, or manipulation through reconfiguration. The robot design focused on its control strategy, which used a decentralised modular controller with two levels: a high-level behaviour-based component, and a low-level component that generates commands via constrained optimisation using either a linear or a nonlinear model predictive control (MPC) approach, constituting a novel control method for rotating objects based on angular momentum exchanges and mass distribution changes. The controller design relied on modelling the robot modules and the object as a rotating discretised deformable continuum whose rigid part, the object, was an ellipsoid. All parameters were normalised where possible, and disturbances and sensor and actuator errors were modelled as biased white noise and coloured noise, respectively. The correctness of the overall control algorithm was ensured. The main objective of the MPC controllers was to control the deployment of a module from the tip of the spin axis to the plane containing the object's centre of mass, coiling around the spin axis while ensuring that the object's rotational state tracked a reference state. Simulations showed that the nonlinear MPC controller should be preferred over the linear one and that, for an object-to-module mass ratio of 10,000, the nonlinear MPC controller is best suited to maintaining stability and meets the deployment requirement, suggesting that the proposed solution would be acceptable for medium-sized objects such as asteroids.
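    A minimal sketch of the low-level receding-horizon step described above, assuming toy linear dynamics and placeholder cost weights and actuator limits (the thesis itself models a rotating discretised deformable continuum and uses both linear and nonlinear MPC); the matrices, thresholds, and function names below are illustrative only.

```python
# Minimal receding-horizon (MPC) command generator sketch.
# Assumption: simple linear dynamics x_{k+1} = A x_k + B u_k with toy values;
# the thesis's actual model and costs are not reproduced here.
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # placeholder state-transition matrix
B = np.array([[0.005], [0.1]])            # placeholder input matrix
x0 = np.array([1.0, 0.0])                 # initial deviation from the reference
x_ref = np.zeros(2)                       # reference rotational state (normalised)
H = 20                                    # prediction horizon length
u_max = 0.2                               # actuator limit (box constraint)

def rollout(u_seq, x):
    """Propagate the linear model over the horizon for a given input sequence."""
    xs = []
    for u in u_seq:
        x = A @ x + B.flatten() * u
        xs.append(x)
    return np.array(xs)

def cost(u_seq):
    """Tracking cost toward the reference state plus a small control-effort penalty."""
    xs = rollout(u_seq, x0)
    tracking = np.sum((xs - x_ref) ** 2)
    effort = 0.01 * np.sum(np.asarray(u_seq) ** 2)
    return tracking + effort

# Constrained optimisation over the input sequence (bounds encode actuator limits).
res = minimize(cost, np.zeros(H), bounds=[(-u_max, u_max)] * H, method="L-BFGS-B")
u_apply = res.x[0]   # apply only the first command, then re-solve at the next step
print("first MPC command:", u_apply)
```

    In a full controller this solve would be repeated at every time step, applying only the first command of each optimised sequence before re-planning.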

    On the Computation of the Gaussian Rate-Distortion-Perception Function

    Full text link
    In this paper, we study the computation of the rate-distortion-perception function (RDPF) for a multivariate Gaussian source under mean squared error (MSE) distortion and, as perception metrics, the Kullback-Leibler divergence, geometric Jensen-Shannon divergence, squared Hellinger distance, and squared Wasserstein-2 distance. To this end, we first characterize the analytical bounds of the scalar Gaussian RDPF for the aforementioned divergence functions, also providing the RDPF-achieving forward "test-channel" realization. Focusing on the multivariate case, we establish that, for tensorizable distortion and perception metrics, the optimal solution resides in the vector space spanned by the eigenvectors of the source covariance matrix. Consequently, the multivariate optimization problem can be expressed as a function of the scalar Gaussian RDPFs of the source marginals, constrained by global distortion and perception levels. Leveraging this characterization, we design an alternating minimization scheme based on the block nonlinear Gauss-Seidel method, which optimally solves the problem while identifying the Gaussian RDPF-achieving realization. Furthermore, the associated algorithmic embodiment is provided, as well as the convergence and the rate-of-convergence characterization. Lastly, for the "perfect realism" regime, the analytical solution for the multivariate Gaussian RDPF is obtained. We corroborate our results with numerical simulations and draw connections to existing results.
    Comment: This paper has been submitted for journal publication.
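    A schematic statement of the decomposition described above, in assumed notation (not taken from the paper): the $\lambda_i$ are the eigenvalues of the source covariance, the $R_i$ are the scalar Gaussian RDPFs of the marginals, and $D$, $P$ are the global distortion and perception budgets; for comparison, with the perception constraint removed each summand reduces to the classical scalar Gaussian rate-distortion function under MSE.

```latex
\begin{equation}
  R(D, P) \;=\; \min_{\substack{D_i \ge 0,\; P_i \ge 0 \\ \sum_i D_i \le D,\; \sum_i P_i \le P}}
  \;\sum_{i=1}^{n} R_i(D_i, P_i)
\end{equation}
% Without a perception constraint, each summand reduces to the classical
% scalar Gaussian rate-distortion function under MSE distortion:
\begin{equation}
  R_i(D_i) \;=\; \max\Bigl\{0,\; \tfrac{1}{2}\log\tfrac{\lambda_i}{D_i}\Bigr\}
\end{equation}
```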

    The conception of New Venture Ideas by novice entrepreneurs: A question of nature or nurture?

    Get PDF
    This research aims to further understanding of the cognitive mechanisms behind the generation of entrepreneurial New Venture Ideas (NVIs). It assesses the extent to which this competency is innate or capable of being proactively developed. This has particular salience in the context of novice entrepreneurs, a group lacking the knowledge corridors and cognitive frameworks of their serial or portfolio counterparts. Innovative in nature, NVIs represent the first candidate concepts for new means-end relationships. As cognitive products existing at the very start of the entrepreneurial journey, they have attracted significant academic attention to the cognitive micro-foundations that influence their conception. Nonetheless, notable gaps in this body of work remain, not least in how different cognitive antecedents affect NVI quality. This thesis examines these issues through three independent but inter-related studies. The first undertakes a systematic literature review of the existing empirical research to elucidate the extent to which, and the transmission methods through which, entrepreneurship education and training (EET) supports opportunity identification. The second takes a quantitative approach to observe how an individual's innate cognitive capabilities, notably those aspects of intelligence related to executive functioning, explain significant inter-person performance differences in entrepreneurial ideation. The third adopts an experimental methodology to assess the extent to which the use of cognitive heuristics, in this case analogical reasoning, affects performance outcomes in the conception of NVIs, and the extent to which it can be supported. Collectively, the thesis finds that EET interventions, innate cognitive capabilities, and cognitive heuristics all contribute to NVI quality. It highlights the potency of nurturing interventions but simultaneously illustrates their limitations. With different cognitive antecedents shown to exhibit varying degrees of malleability, this research has relevance to both the structure, and expectations, of EET programmes dedicated to the 'fuzzy front end' of entrepreneurship.

    Quantum soft-covering lemma with applications to rate-distortion coding, resolvability and identification via quantum channels

    Full text link
    We propose a quantum soft-covering problem for a given general quantum channel and one of its output states, which consists in finding the minimum rank of an input state needed to approximate the given channel output. We then prove a one-shot quantum covering lemma in terms of smooth min-entropies by leveraging decoupling techniques from quantum Shannon theory. This covering result is shown to be equivalent to a coding theorem for rate distortion under a posterior (reverse) channel distortion criterion [Atif, Sohail, Pradhan, arXiv:2302.00625]. Both one-shot results directly yield corollaries about the i.i.d. asymptotics, in terms of the coherent information of the channel. The power of our quantum covering lemma is demonstrated by two additional applications: first, we formulate a quantum channel resolvability problem and provide one-shot as well as asymptotic upper and lower bounds; second, we provide new upper bounds on the unrestricted and simultaneous identification capacities of quantum channels, in particular separating for the first time the simultaneous identification capacity from the unrestricted one, proving a long-standing conjecture of the last author.
    Comment: 29 pages, 3 figures; v2 fixes an error in Definition 6.1 and various typos and minor issues throughout.
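    A schematic restatement of the covering problem described above, in assumed notation (the precise approximation measure is not specified in this abstract; trace distance is used purely for illustration): given a channel $\mathcal{N}$ and a target output $\sigma = \mathcal{N}(\rho)$, one asks for the smallest-rank input whose output is $\epsilon$-close to $\sigma$.

```latex
\begin{equation}
  r(\epsilon) \;=\; \min\Bigl\{ \operatorname{rank}(\rho') \;:\;
  \tfrac{1}{2}\bigl\| \mathcal{N}(\rho') - \sigma \bigr\|_{1} \le \epsilon \Bigr\}
\end{equation}
```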

    Atmospheric Science at NASA

    Get PDF
    Honorable Mention, 2008 ASLI Choice Awards, Atmospheric Science Librarians International.
    This book offers an informed and revealing account of NASA’s involvement in the scientific understanding of the Earth’s atmosphere. Since the nineteenth century, scientists have attempted to understand the complex processes of the Earth’s atmosphere and the weather created within it. This effort has evolved with the development of new technologies—from the first instrument-equipped weather balloons to multibillion-dollar meteorological satellite and planetary science programs. Erik M. Conway chronicles the history of atmospheric science at NASA, tracing the story from its beginnings in 1958, the International Geophysical Year, through to the present, focusing on NASA’s programs and research in meteorology, stratospheric ozone depletion, and planetary climates and global warming. But the story is not only a scientific one. NASA’s researchers operated within an often politically contentious environment. Although environmental issues garnered strong public and political support in the 1970s, the following decades saw increased opposition to environmentalism as a threat to free market capitalism. Atmospheric Science at NASA critically examines this politically controversial science, dissecting the often convoluted roles, motives, and relationships of the various institutional actors involved—among them NASA, congressional appropriation committees, government weather and climate bureaus, and the military.

    Generalized Random Gilbert-Varshamov Codes: Typical Error Exponent and Concentration Properties

    Get PDF
    We find the exact typical error exponent of constant-composition generalized random Gilbert-Varshamov (RGV) codes over discrete memoryless channels (DMCs) with generalized likelihood decoding. We show that the typical error exponent of the RGV ensemble is equal to the expurgated error exponent, provided that the RGV codebook parameters are chosen appropriately. We also prove that the random coding exponent converges in probability to the typical error exponent, and the corresponding non-asymptotic concentration rates are derived. Our results show that the decay rate of the lower tail is exponential, while that of the upper tail is double exponential above the expurgated error exponent. The explicit dependence of the decay rates on the RGV distance functions is characterized.
    Comment: 60 pages, 2 figures.
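    A schematic rendering of the two concentration statements above, in assumed notation (not taken from the paper): $E_n$ denotes the error exponent of a randomly drawn RGV code at blocklength $n$, $E_{\mathrm{typ}}$ the typical error exponent, and $f$, $g$ positive rate functions that depend on the RGV distance functions.

```latex
\begin{align}
  \Pr\bigl[E_n \le E_{\mathrm{typ}} - \delta\bigr] &\le e^{-n f(\delta)}, \\
  \Pr\bigl[E_n \ge E_{\mathrm{typ}} + \delta\bigr] &\le e^{-e^{\,n g(\delta)}}
  \quad \text{(for thresholds above the expurgated exponent)}
\end{align}
```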

    Assuming Data Integrity and Empirical Evidence to The Contrary

    Get PDF
    Background: Not all respondents to surveys apply their minds or understand the posed questions, and as such provide answers which lack coherence; this threatens the integrity of the research. Casual inspection and limited research on the 10-item Big Five Inventory (BFI-10), included in the dataset of the World Values Survey (WVS), suggested that random responses may be common. Objective: To specify the percentage of cases in the BFI-10 which include incoherent or contradictory responses, and to test the extent to which the removal of these cases improves the quality of the dataset. Method: The WVS data on the BFI-10, measuring the Big Five personality traits (B5P), in South Africa (N = 3 531) were used. Incoherent or contradictory responses were removed. Then the cases from the cleaned-up dataset were analysed for their theoretical validity. Results: Only 1 612 (45.7%) cases were identified as not including incoherent or contradictory responses. The cleaned-up data did not mirror the B5P structure, as was envisaged. The test for common method bias was negative. Conclusion: In most cases the responses were incoherent. Cleaning up the data did not improve the psychometric properties of the BFI-10. This raises concerns about the quality of the WVS data, the BFI-10, and the universality of B5P theory. Given these results, it would be unwise to use the BFI-10 in South Africa. Researchers are alerted to properly assess the psychometric properties of instruments before using them, particularly in a cross-cultural setting.
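    A minimal sketch of the kind of coherence screen the Method section describes, assuming (hypothetically) that each Big Five trait is measured by one positively and one negatively keyed BFI-10 item on a 1-5 scale; the column names, thresholds, and contradiction rule below are illustrative and are not the study's actual criteria.

```python
# Sketch of a coherence screen for BFI-10-style data (assumed rule, not the study's).
import pandas as pd

# Hypothetical column names: (positively keyed item, reverse-keyed item) per trait.
ITEM_PAIRS = [("ext_pos", "ext_rev"), ("agr_pos", "agr_rev"),
              ("con_pos", "con_rev"), ("neu_pos", "neu_rev"),
              ("ope_pos", "ope_rev")]

def contradictory(row, lo=2, hi=4):
    """Flag a case as contradictory if, for any trait, both the item and its
    reverse-keyed pair receive responses on the same extreme side of the scale."""
    for pos, rev in ITEM_PAIRS:
        both_high = row[pos] >= hi and row[rev] >= hi
        both_low = row[pos] <= lo and row[rev] <= lo
        if both_high or both_low:
            return True
    return False

def clean(df: pd.DataFrame) -> pd.DataFrame:
    """Return only the cases that pass the (assumed) coherence screen."""
    mask = df.apply(contradictory, axis=1)
    return df.loc[~mask].copy()
```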

    Understanding the influence of port community relationships on port community performance – a social capital perspective

    Get PDF
    This research investigates the influence of port community relationships on port community performance through the lens of social capital. While port performance research has traditionally focused on the micro or macro level, this study explores port performance at the meso level and proposes the term port community performance in acknowledgement of the contribution that the interactions of port community members make to the focal port’s performance. Since this type of investigation is a novel approach within the field of port performance research, this study addresses the gap by applying social capital theory to the context of Scottish trust ports. Specifically, this study adopts Nahapiet and Ghoshal’s (1998) conceptualisation of social capital and further incorporates the more recent findings of Hartmann and Herb (2015) on social capital’s influence on performance in triadic relationship settings, as the latter allows for a suitable conceptualisation of the triadic port community setting between port authority, cargo owners and port service providers. Because their performance is influenced by the quality of their relationships and subsequent interactions, the context of Scottish trust ports lends itself to extending social capital theory to develop an understanding of these relationships’ influence on the performance of a port. This project employed a multiple-case study design. Two Scottish trust ports were purposively selected against a set of established criteria shared across the sample of suitable ports, which allows for the synthesis of cases. As part of the data collection, a total of 30 semi-structured interviews were conducted with 30 representatives of the three port stakeholder groups: port authority, cargo owners and port service providers. The data gathered through interviews are further enriched by participant observations, informal off-the-record exchanges and field notes. This project is underpinned by an interpretivist perspective. This study contributes to practice by identifying how facets of social capital such as trust, shared values, or norms in port community relationships positively influence port community performance, which is of particular value for smaller ports with diverse cargo portfolios. The theoretical contribution of this study is twofold: it highlights how the extended setting of focal relationships in the port community can influence the manifestation of the dark side of social capital, and it adds to the body of social capital theory by delineating how existing levels of social capital aligned with one of its dimensions can facilitate the accumulation of facets attributed to the other dimensions.

    Private information retrieval and function computation for noncolluding coded databases

    Get PDF
    The rapid development of information and communication technologies has motivated many data-centric paradigms such as big data and cloud computing. The resulting paradigmatic shift to cloud/network-centric applications and the accessibility of information over public networking platforms have brought information privacy to the focal point of current research challenges. Motivated by these emerging privacy concerns, the problem of private information retrieval (PIR), a standard problem of information privacy that originated in theoretical computer science, has recently attracted much attention in the information theory and coding communities. The goal of PIR is to allow a user to download a message from a dataset stored on multiple (public) databases without revealing the identity of the message to the databases and with the minimum communication cost. Thus, the primary performance metric for a PIR scheme is the PIR rate, defined as the ratio between the size of the desired message and the total amount of downloaded information. The first part of this dissertation focuses on a generalization of the PIR problem known as private computation (PC) from a distributed storage system (DSS). In PC, a user wishes to compute a function of f variables (or messages) stored in n noncolluding coded databases, i.e., databases storing data encoded with an [n, k] linear storage code, while revealing no information about the desired function to the databases. Here, colluding databases are databases that communicate with each other in order to deduce the identity of the computed function. First, the problem of private linear computation (PLC) for linearly encoded DSSs is considered. In PLC, a user wishes to privately compute a linear combination over the f messages. For the PLC problem, the PLC capacity, i.e., the maximum achievable PLC rate, is characterized. Next, the problem of private polynomial computation (PPC) for linearly encoded DSSs is considered. In PPC, a user wishes to privately compute a multivariate polynomial of degree at most g over the f messages. For the PPC problem, an outer bound on the PPC rate is derived, and two novel PPC schemes are constructed. The first scheme considers Reed-Solomon coded databases with Lagrange encoding and leverages ideas from recently proposed star-product PIR and Lagrange coded computation. The second scheme considers databases coded with systematic Lagrange encoding. Both schemes yield improved rates compared to known PPC schemes. Finally, the general problem of PC for arbitrary nonlinear functions from a replicated DSS is considered. For this problem, upper and lower bounds on the achievable PC rate are derived and compared. In the second part of this dissertation, a new variant of the PIR problem, denoted pliable private information retrieval (PPIR), is formulated. In PPIR, the user is pliable, i.e., interested in any message from a desired subset of the available dataset. In the considered setup, f messages are replicated in n noncolluding databases and classified into F classes. The user wishes to retrieve any one or more messages from multiple desired classes, while revealing no information about the identity of the desired classes to the databases. This problem is termed multi-message PPIR (M-PPIR), and the single-message PPIR (PPIR) problem is introduced as an elementary special case of M-PPIR. In PPIR, the user wishes to retrieve any one message from one desired class.
    For the two considered scenarios, outer bounds on the M-PPIR rate are derived for an arbitrary number of databases. Next, achievable schemes are designed for n replicated databases, for arbitrary n. Interestingly, the capacity of PPIR, i.e., the maximum achievable PPIR rate, is shown to match the capacity of PIR from n replicated databases storing F messages. A similar insight is shown to hold for the general case of M-PPIR.
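    For reference, a worked statement of the quantity the last claim invokes: the known capacity of classical PIR from n replicated, noncolluding databases storing F messages (the Sun-Jafar result), which the abstract states the PPIR capacity matches; the symbol C_PIR is assumed notation.

```latex
\begin{equation}
  C_{\mathrm{PIR}}(n, F) \;=\;
  \left( 1 + \frac{1}{n} + \frac{1}{n^{2}} + \cdots + \frac{1}{n^{F-1}} \right)^{-1}
\end{equation}
```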

    Information Theory and Machine Learning

    Get PDF
    The recent successes of machine learning, especially regarding systems based on deep neural networks, have encouraged further research activities and raised a new set of challenges in understanding and designing complex machine learning algorithms. New applications require learning algorithms to be distributed, have transferable learning results, use computation resources efficiently, converge quickly in online settings, have performance guarantees, satisfy fairness or privacy constraints, incorporate domain knowledge on model structures, etc. A new wave of developments in statistical learning theory and information theory has set out to address these challenges. This Special Issue, "Machine Learning and Information Theory", aims to collect recent results in this direction, reflecting a diverse spectrum of visions and efforts to extend conventional theories and develop analysis tools for these complex machine learning systems.