
    Yeşilçam Film Posters of the 60s and 70s: Representing Romance

    Cinema and the movie poster have a powerful potential for establishing the ethos and mythology of a people, and both assume an audience on the premises of the cultural context within which they are produced. A movie poster, contributing to mass communication through designed visual means, therefore carries a great deal of information about the film industry, the evolution of design, craftsmanship, and the taste of artists and society, as well as standing as strong evidence of its time. The decade 1965-1975 is important for Turkish cinema as the golden years of the industry. Beyond the fond memories associated with them, these posters embody the history of Turkish cinema, of graphic design, and of Turkish society. The great deal of labour involved, resulting from the poor resources of the period, the design elements, typography, composition, and even the mistakes all reflect the character of the Yeşilçam of that time. Drawing on an archive of posters and on interviews with designers, cinema historians and labourers, the article focuses on the relationship of design with industry, technology and society. The Yeşilçam movie posters are analyzed through the methods of iconography as well as reception theory; beyond their graphic and visual characteristics, they are thus placed in the context of social life, the conditions of the movie industry in Turkey, the character of Yeşilçam melodrama, and the evolution of the Turkish movie poster. Detailed research and observation reveal four categories as the most frequent types among Turkish melodrama posters, according to the kinds of images tied to specific themes and concepts and the design schemas they use: "star posters", "beefcake posters", "phallic woman posters", and "posters of movement". The paper details how these categories are formed, their significance, and their relation to the creation of the concept of romance.
    Keywords: Yeşilçam melodrama; film poster; graphic design; romance

    Spherical collapse of supermassive stars: neutrino emission and gamma-ray bursts

    We present the results of numerical simulations of the spherically symmetric gravitational collapse of supermassive stars (SMS). The collapse is studied using a general relativistic hydrodynamics code. The coupled system of Einstein and fluid equations is solved employing observer time coordinates, by foliating the spacetime by means of outgoing null hypersurfaces. The code contains an equation of state which includes effects due to radiation, electrons and baryons, and detailed microphysics to account for electron-positron pairs. In addition, energy losses by thermal neutrino emission are included. We are able to follow the collapse of SMS from the onset of instability up to the point of black hole formation. Several SMS with masses in the range $5\times 10^5 M_{\odot}$ - $10^9 M_{\odot}$ are simulated. In all models an apparent horizon forms initially, enclosing the innermost 25% of the stellar mass. From the computed neutrino luminosities, estimates of the energy deposition by $\nu\bar{\nu}$-annihilation are obtained. Only a small fraction of this energy is deposited near the surface of the star, where, as proposed recently by Fuller & Shi (1998), it could cause the ultrarelativistic flow believed to be responsible for $\gamma$-ray bursts. Our simulations show that for collapsing SMS with masses larger than $5\times 10^5 M_{\odot}$ the energy deposition is at least two orders of magnitude too small to explain the energetics of observed long-duration bursts at cosmological redshifts. In addition, in the absence of rotational effects the energy is deposited in a region containing most of the stellar mass. Therefore relativistic ejection of matter is impossible.
    Comment: 13 pages, 11 figures, submitted to A&A
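
    As a rough illustration of the scales involved (not the paper's general relativistic hydrodynamics code), the following Python sketch computes the Schwarzschild radius and a Newtonian free-fall time for the quoted mass range; the starting radius of 100 Schwarzschild radii is an arbitrary illustrative choice.

        import math

        G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
        C = 2.998e8        # speed of light, m/s
        M_SUN = 1.989e30   # solar mass, kg

        for mass_msun in (5e5, 1e7, 1e9):            # mass range from the abstract
            m = mass_msun * M_SUN
            r_s = 2 * G * m / C**2                   # Schwarzschild radius, m
            r_0 = 100 * r_s                          # illustrative starting radius
            t_ff = math.sqrt(r_0**3 / (2 * G * m))   # Newtonian free-fall time, s
            print(f"M = {mass_msun:.0e} M_sun: r_s = {r_s / 1e3:.2e} km, "
                  f"t_ff = {t_ff:.2e} s")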

    Business strategy and firm performance: the British corporate economy, 1949-1984

    There has been considerable and ongoing debate about the performance of the British economy since 1945. Empirical studies have concentrated on aggregate or industry-level indicators; few have examined individual firms' financial performance. This study takes a sample of c.3000 firms in 19 industries and identifies Britain's best performing companies over a period of 35 years. Successful companies are defined as a) those that survive as independent entities, b) those that outperform the peer-group average return on capital for their industry, and c) those that outperform other firms in the economy according to return on capital relative to the industry average. Results are presented as league tables of success, and some tentative explanations are offered concerning the common strategies of successful firms. A broader research agenda for British business history is suggested.
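
    As an illustration of the selection criteria described above, here is a minimal pandas sketch; the panel layout and column names (firm, industry, year, survived, roc) are hypothetical, not the study's actual data.

        import pandas as pd

        def successful_firms(panel: pd.DataFrame) -> pd.Series:
            # Criterion (a): the firm survives as an independent entity
            # throughout the observation window.
            survivors = panel.groupby("firm")["survived"].all()

            # Criteria (b)/(c): mean return on capital exceeds the
            # contemporaneous industry average.
            industry_avg = panel.groupby(["industry", "year"])["roc"].transform("mean")
            excess = (panel["roc"] - industry_avg).groupby(panel["firm"]).mean()

            return survivors & (excess > 0)   # boolean Series indexed by firm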

    Using Synthetic Data to Train Neural Networks is Model-Based Reasoning

    We draw a formal connection between using synthetic training data to optimize neural network parameters and approximate, Bayesian, model-based reasoning. In particular, training a neural network using synthetic data can be viewed as learning a proposal distribution generator for approximate inference in the synthetic-data generative model. We demonstrate this connection in a recognition task where we develop a novel Captcha-breaking architecture and train it using synthetic data, demonstrating both state-of-the-art performance and a way of computing task-specific posterior uncertainty. Using a neural network trained this way, we also demonstrate successful breaking of real-world Captchas currently used by Facebook and Wikipedia. Reasoning from these empirical results and drawing connections with Bayesian modeling, we discuss the robustness of synthetic data results and suggest important considerations for ensuring good neural network generalization when training with synthetic data.
    Comment: 8 pages, 4 figures
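
    The core idea can be sketched in a few lines: sample (latent, observation) pairs from a known generative model and fit a network to invert it, so the trained network serves as an amortized proposal for inference in that model. The toy generator below (a noisy sinusoid with a latent frequency) is a stand-in for the paper's Captcha renderer, and the use of scikit-learn's MLPRegressor is purely illustrative.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        def generative_model(z, n_points=32):
            # Simulator: latent frequency z -> noisy sinusoid observation.
            t = np.linspace(0, 1, n_points)
            signal = np.sin(2 * np.pi * z[:, None] * t)
            return signal + 0.1 * rng.normal(size=signal.shape)

        z_train = rng.uniform(1.0, 5.0, size=5000)   # latents drawn from the prior
        x_train = generative_model(z_train)          # synthetic observations

        inverse_net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300)
        inverse_net.fit(x_train, z_train)            # learn observation -> latent

        z_true = np.array([2.5])
        x_obs = generative_model(z_true)             # stands in for a real observation
        print("true z:", z_true[0], "recovered z:", inverse_net.predict(x_obs)[0])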

    Overcoming data scarcity of Twitter: using tweets as bootstrap with application to autism-related topic content analysis

    Notwithstanding recent work which has demonstrated the potential of using Twitter messages for content-specific data mining and analysis, the depth of such analysis is inherently limited by the scarcity of data imposed by the 140 character tweet limit. In this paper we describe a novel approach for targeted knowledge exploration which uses tweet content analysis as a preliminary step. This step is used to bootstrap more sophisticated data collection from directly related but much richer content sources. In particular we demonstrate that valuable information can be collected by following URLs included in tweets. We automatically extract content from the corresponding web pages and, treating each web page as a document linked to the original tweet, show how a temporal topic model based on a hierarchical Dirichlet process can be used to track the evolution of a complex topic structure of a Twitter community. Using autism-related tweets we demonstrate that our method is capable of capturing a much more meaningful picture of information exchange than user-chosen hashtags.
    Comment: IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, 201
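
    The bootstrap step can be sketched as follows: extract URLs from tweet text, fetch the linked pages as richer documents, and fit a nonparametric topic model over them. gensim's HdpModel is used here as a simple stand-in for the paper's temporal hierarchical Dirichlet process, and the tweets list is hypothetical input.

        import re
        import requests
        from bs4 import BeautifulSoup
        from gensim import corpora
        from gensim.models import HdpModel

        URL_RE = re.compile(r"https?://\S+")

        def page_text(url):
            # Fetch a linked page and strip it down to visible text.
            html = requests.get(url, timeout=10).text
            return BeautifulSoup(html, "html.parser").get_text(separator=" ")

        tweets = [
            "New review of early intervention research https://example.org/review",
            "Useful resource on sensory processing https://example.org/sensory",
        ]

        docs = []
        for tweet in tweets:
            for url in URL_RE.findall(tweet):
                # Treat each linked web page as a document tied to its tweet.
                docs.append(page_text(url).lower().split())

        dictionary = corpora.Dictionary(docs)
        corpus = [dictionary.doc2bow(doc) for doc in docs]
        hdp = HdpModel(corpus, id2word=dictionary)   # nonparametric topic model
        print(hdp.print_topics(num_topics=5, num_words=8))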

    Transfer Learning for OCRopus Model Training on Early Printed Books

    A method is presented that significantly reduces the character error rates for OCR text obtained from OCRopus models trained on early printed books when only small amounts of diplomatic transcriptions are available. This is achieved by building from already existing models during training instead of starting from scratch. To overcome the discrepancies between the character set of the pretrained model and the additional ground truth, the OCRopus code is adapted to allow for alphabet expansion or reduction: characters can now be flexibly added to or deleted from the pretrained alphabet when an existing model is loaded. For our experiments we use a self-trained mixed model on early Latin prints and the two standard OCRopus models on modern English and German Fraktur texts. The evaluation on seven early printed books showed that training from the Latin mixed model reduces the average number of errors by 43% and 26% compared to training from scratch with 60 and 150 lines of ground truth, respectively. Furthermore, it is shown that even building from mixed models trained on data unrelated to the newly added training and test data can lead to significantly improved recognition results.
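
    The alphabet expansion idea can be sketched at the level of an output weight matrix (this is not OCRopus's actual code): columns for characters shared with the pretrained alphabet are copied over, columns for newly added characters are freshly initialized, and columns for dropped characters are simply omitted. Shapes are assumed to be (hidden_dim, alphabet_size).

        import numpy as np

        def expand_alphabet(weights, old_alphabet, new_alphabet, seed=0):
            # weights: pretrained output weights, shape (hidden_dim, len(old_alphabet)).
            rng = np.random.default_rng(seed)
            hidden_dim = weights.shape[0]
            new_weights = rng.normal(scale=0.01, size=(hidden_dim, len(new_alphabet)))

            old_index = {ch: i for i, ch in enumerate(old_alphabet)}
            for j, ch in enumerate(new_alphabet):
                if ch in old_index:                  # shared character: reuse weights
                    new_weights[:, j] = weights[:, old_index[ch]]
            return new_weights                       # dropped characters fall away

        # Example: add the long s (ſ) to a Latin alphabet and drop the digits.
        old = list("abcdefghijklmnopqrstuvwxyz0123456789")
        new = list("abcdefghijklmnopqrstuvwxyzſ")
        w = expand_alphabet(np.zeros((128, len(old))), old, new)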

    Bridging the gap: building better tools for game development

    The following thesis questions how we design game-making tools and how developers may build tools that are easier to use. It highlights the inadequacies of current game-making programs and introduces Goal-Oriented Design as a possible solution. It also examines the processes of digital product development, reflecting on the necessity for design and development methods to work cohesively for meaningful results. Interaction Design is, in essence, the abstracting of the key relations that matter to the contextual environment. The attempt to tie Interaction Design principles and Game Design issues together with Software Development practices has led to the production of the user-centred game engine, PlayBoard.
    • 

    corecore