
    Hierarchy Based Construction of Signature Matrices for Simplified Decoding in Overloaded CDMA

    The overloaded CDMA system, as a solution to the capacity limit of its conventional counterpart, has drawn sustained interest from researchers. While there exist numerous proposals for constructing uniquely decodable (UD) signature matrices for overloaded CDMA systems with very high overloading factors, most of them lack an efficient multiuser detector (MUD) for noisy transmission. Here, by efficient, we mean a MUD with acceptable BER performance and a simplified design. The inefficiency of several MUDs is primarily due to the excess multiple access interference (MAI) caused by the rise in the number of active users; its random nature prevents accurate estimation and elimination. Under such constraints, if the signature matrices can be intelligently constructed to generate a defined and controlled pattern (hierarchy) of MAI, then the designed MUD can exploit knowledge of this hierarchy to remove the MAI completely and attain better error performance at a much lower cost of complexity. This is the motivation for the research in this thesis. First, we propose the ternary signature matrix with orthogonal subsets (TSMOS), where the matrix with index k comprises k orthogonal subsets, each having a different number of signatures, and all subsets besides the first (largest) one are of ternary type. The correlation (interference) pattern among the signatures is mapped into a twin-tree hierarchy, which is then leveraged to design a simplified MUD using linear decoding blocks such as the matched filter (MF) to provide error-free decoding for noiseless transmission and better error performance for noisy transmission. Next, we generalize the construction of TSMOS to multiple structures, i.e., Type I, Type II, Type III, and mixed versions, and reveal the complementary feature of 50% of the signatures of the largest (binary) subset, which further results in their optimality. Further, we propose the non-ternary version of SMOS (called 2k-SMOS), where the binary alphabets in each of the k subsets differ from each other. Despite having no complementary feature, 50% of the signatures of its largest subset are also found to be optimal. The superiority of 2k-SMOS over TSMOS is also verified for an overloading capacity of 150%. Next, we propose and discuss the hybrid SMOS (HSMOS), where subsets from TSMOS and 2k-SMOS are used as constituents to produce multiple SMOS structures, of which TSMOS and 2k-SMOS are special cases. For a better understanding of the features of the whole SMOS family (with an overloading capacity of 200%), the gradual change in the twin-tree hierarchy and the BER performance of the left and right children of the individual subsets are studied. Similar to SMOS, we also introduce the hierarchy-based low-density signature (HLDS) matrix, where any UD matrix satisfying a particular criterion can serve as the basis set. With the Hadamard matrix as the basis set, we design a MUD that uses the MF to implement the decision vector search (DVS) algorithm, which exploits the advantageous hierarchy of the constellation of the transmitted vector to offer error-free decoding. For a noisy channel, the marginal degradation in the BER of the MUD (DVS) relative to the optimum joint maximum-likelihood decoder (MLD) is negligible when weighed against the significant gain in complexity. For the smallest dimension of the Hadamard matrix as the basis, the MUD is further simplified to offer recovery using a comparison-driven decision-making algorithm, called comparison-aided decoding (CAD). Despite its simplicity, the error performance of the MUD (CAD) is observed to be very close to that of the MUD (DVS)
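    As a minimal illustration of the overloaded setting described above (a toy sketch only, not the TSMOS construction or the DVS/CAD detectors from the thesis; all names are illustrative), the Python snippet below builds a small ternary uniquely decodable signature matrix for K = 3 users on N = 2 chips (150% overloading) and recovers the transmitted bits by brute-force joint maximum-likelihood detection:

    import numpy as np
    from itertools import product

    # Ternary signature matrix, N = 2 chips x K = 3 users. S @ b is
    # distinct for every b in {-1, +1}^3, so the matrix is uniquely
    # decodable (UD) and noiseless decoding is unambiguous.
    S = np.array([[1,  1, 1],
                  [1, -1, 0]])

    def ml_decode(y, S):
        # Brute-force joint maximum-likelihood detection: enumerate all
        # antipodal input vectors and keep the one whose image under S
        # is closest to the received vector y.
        K = S.shape[1]
        candidates = [np.array(b) for b in product((-1, 1), repeat=K)]
        return min(candidates, key=lambda b: float(np.linalg.norm(y - S @ b)))

    b = np.array([1, -1, 1])              # transmitted bits
    y = S @ b + 0.1 * np.random.randn(2)  # AWGN channel
    print(ml_decode(y, S))                # recovers [ 1 -1  1] with high probability

    The exponential search over all 2^K candidates is exactly what the hierarchy-based detectors in the thesis avoid: by shaping the MAI into a known twin-tree pattern, the proposed MUDs remove it with linear matched-filter blocks instead.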

    “INDUSTRIAL LEGISLATURES”: CONSENSUS STANDARDIZATION IN THE SECOND AND THIRD INDUSTRIAL REVOLUTIONS

    Consensus standardization is a social process in which technical experts from public, private, and non-profit sectors negotiate the direction and shape of technological change. Scholars in a variety of disciplines have recognized the importance of consensus standards as alternatives to standards that arise through market mechanisms or standards mandated by regulators. Rather than treating the consensus method as some sort of timeless organizational form or ever-present alternative to markets or laws, I argue that consensus standardization is itself a product of history. In the first two chapters, I explain the origins and growth of consensus standards bodies between 1880 and 1930 as a reaction to and critique of the existing political economy of engineering. By considering the standardization process—instead of the internal dynamics of a particular firm or technology—as the primary category of analysis, I am able to emphasize the cooperative relations that sustained the American style of competitive managerial capitalism during the Second Industrial Revolution. In the remaining four chapters, I examine the processes of network architecture and standardization in the creation of four communications networks during the twentieth century: AT&T’s monopoly telephone network, the Internet, digital cellular telephone networks, and the World Wide Web. Each of these four networks embodied critiques—always implicit and frequently explicit—of preceding and competing networks. These critiques, visible both in the technological design of networks as well as in the institutional design of standard-setting bodies, reflected the political convictions of successive generations of engineers and network architects. The networks described in this dissertation were thus turning points in the century-long development of an organizational form. Seen as part of a common history, they tell the story of how consensus-based institutions became the dominant mode for setting standards in the Third Industrial Revolution, and created the foundational standards of the information infrastructures upon which a newly globalized economy and society—the Network Society—could grow

    Cyberterrorism as a Hybrid Threat: a Comparison between the Iranian and Estonian Cases

    This master's thesis studies cyber-terrorism within the field of security studies: how it presents itself as a hybrid threat in today's society and in the international system, and its impacts within these boundaries. It approaches, through an institutionalist perspective, how states understand this threat, what definitions they hold on the subject, what effects this type of threat causes within their societies, and what means of counter-response these actors have at their disposal to guard against cyber-terrorism. It also aims to expose the various opinions the concept carries, such as its definitions and interpretations, and to present the context in which it is embedded. From there, the cases of Estonia and Iran are presented and analysed to understand whether there is a difference in the response of two states with different contexts - one a NATO member and one that is not - and how this shapes the way they approach the issue, how they react, and how they protect themselves from it. Finally, the differences found and the reasons interpreted for them are presented, along with possible answers to the research question

    Perceptual Bayesian inference in autism and schizophrenia

    Recent theories in the field of computational psychiatry regard schizophrenia (SCZ) and autism spectrum disorder (ASD) as impairments in the Bayesian inference performed by the brain. In Bayesian terms, perception results from the optimal real-time integration of sensory information (the 'likelihood'), which is intrinsically noisy and ambiguous, with prior expectations about the states of the world (the 'prior'), which serve to disambiguate the meaning of the sensory information. Priors capture statistical regularities in the environment and are constantly updated to keep up with changes in those regularities. The extent to which the prior or the likelihood dominates perception depends on the uncertainty with which each is represented, with less uncertainty resulting in more influence. Individuals with ASD and SCZ might show impairments in how they update their priors and/or in how much uncertainty they ascribe to prior and likelihood representations, leading to differences in inference. While this Bayesian account can be argued to be consistent with many previous experimental findings and symptoms of SCZ and ASD, recent experimental work inspired by these ideas has produced mixed results. In this work, we investigated possible Bayesian impairments in SCZ and ASD experimentally by addressing some of the methodological limitations of previous work. Most notably, we used an experimental design that allows the separate influences of priors and likelihoods to be disentangled and quantified, and we tested both SCZ and ASD patient groups as well as autistic and schizotypy traits in the general population. We administered a visual motion perception task that rapidly induces prior expectations about the stimulus motion direction, leading to biases and occasional hallucinations that are well described by a Bayesian model. In this task, autistic traits were found to be associated with reduced biases, underpinned by more precise sensory representations, while the acquired priors were not affected by autistic traits. Patients with ASD, however, showed no evidence of increased sensory precision, and there were also no impairments in the acquisition of priors. We likewise found no effects on the acquisition of priors or on sensory representations along schizotypy traits or in patients with SCZ. However, under conditions of high ambiguity, SCZ patients were less likely to hallucinate the stimulus than controls. The second part of the thesis focuses on further exploratory analyses conducted on the same datasets. First, we investigated post-perceptual repulsion effects in our task and whether they were related to trait or group differences. We found clear evidence of repulsion from the cardinal directions. In addition, we found evidence for repulsion from the central reference angle, which was randomly selected for each participant and could only be inferred from the stimulus statistics. Furthermore, we found the repulsion from the central reference angle to be reduced along schizotypy traits. Interestingly, in both the SCZ and ASD groups this repulsion was also found to be negligible. While these results are exploratory, they might point to a trans-diagnostic feature of ASD and SCZ. Second, we investigated within-trial dynamics of evidence accumulation by constructing a Continuous Choice Drift Diffusion Model (CDM), an extension of the classical binary-choice drift diffusion model. The results of this model showed that the increased sensory precision along AQ found in the Bayesian model was underpinned by faster drift rates, while slower responses and reduced hallucinations in SCZ were explained by a larger decision threshold. In addition, this model provided a more complete characterization of performance in the task (by including reaction times), and it emphasizes the importance of accounting for stimulus exposure duration and judgement time in future studies investigating Bayesian inference. Together, this work provides novel experimental evidence that speaks to the hypothesis of impaired Bayesian inference in ASD and SCZ. Furthermore, the analyses of reference repulsion effects and within-trial dynamics provide additional insights into SCZ and ASD differences that extend beyond the Bayesian framework
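    As a toy illustration of the precision-weighting intuition described above (a sketch under standard Gaussian assumptions, not the actual motion model fitted in this work), the posterior over a stimulus feature is a precision-weighted average of the prior and the likelihood:

    import numpy as np

    def posterior(mu_prior, sigma_prior, x, sigma_like):
        # Gaussian prior N(mu_prior, sigma_prior^2) combined with a
        # Gaussian likelihood centred on the measurement x: the posterior
        # mean weights each source by its precision (inverse variance).
        w = sigma_like**2 / (sigma_prior**2 + sigma_like**2)  # weight on the prior
        mu_post = w * mu_prior + (1 - w) * x
        sigma_post = np.sqrt(sigma_prior**2 * sigma_like**2
                             / (sigma_prior**2 + sigma_like**2))
        return mu_post, sigma_post

    # A noisy sensory signal is strongly biased toward the prior ...
    print(posterior(mu_prior=0.0, sigma_prior=5.0, x=10.0, sigma_like=10.0))  # (2.0, ~4.47)
    # ... while a precise one is barely biased at all.
    print(posterior(mu_prior=0.0, sigma_prior=5.0, x=10.0, sigma_like=2.0))   # (~8.62, ~1.86)

    Increasing sensory precision (a smaller sigma_like) pulls the posterior toward the measurement and away from the prior, which is the mechanism proposed here to underlie the reduced biases found along autistic traits.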

    Tracking of Animals Using Airborne Cameras


    The Largest Unethical Medical Experiment in Human History

    This monograph describes the largest unethical medical experiment in human history: the implementation and operation of non-ionizing, non-visible EMF radiation (hereafter called wireless radiation) infrastructure for communications, surveillance, weaponry, and other applications. It is unethical because it violates the key ethical requirement of medical experimentation, "informed consent", for the overwhelming majority of the participants. The monograph provides background on unethical medical research and experimentation, and frames the implementation of wireless radiation within that context. It then identifies a wide spectrum of adverse effects of wireless radiation as reported in the premier biomedical literature over more than seven decades. Even though many of these reported adverse effects are extremely severe, the true extent of their severity has been grossly underestimated, because most of the reported laboratory experiments that produced these effects do not reflect the real-life environment in which wireless radiation operates. Many experiments do not include pulsing and modulation of the carrier signal, and most do not account for the synergistic effects of other toxic stimuli acting in concert with the wireless radiation. These two factors greatly exacerbate the severity of the adverse effects, and their neglect in current (and past) experimentation results in substantial underestimation of the breadth and severity of adverse effects to be expected in real-life situations. This lack of credible safety testing, combined with depriving the public of the opportunity to provide informed consent, contextualizes the operation of the wireless radiation infrastructure as an unethical medical experiment

    Strategic dynamic control for low volume potentially high mix modules

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science; and, Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; in conjunction with the Leaders for Manufacturing Program at MIT, 2004. Includes bibliographical references (p. 118-120).

    The main goal of this project was to provide a case study on image engines for the purpose of developing a supply chain strategy. Initially, the current digital image engine manufacturing core competencies and respective capabilities were further defined and documented. Value chain maps were then developed for each line of business to assess Kodak's transition from analog to digital. The value chains clearly illustrate a lower reliance on consumables (including silver halide film and paper) and demonstrate an emerging capability in the digital equipment market. As image engine performance directly impacts the level of digital image quality, the study concludes that strategic dynamic control of these image engines is critical to maintaining future competitive advantage.

    by Elana Ann Cohen. S.M.; M.B.A.