299 research outputs found

    Kajian Hukum Mengenai Tindak Pidana Pencemaran Nama Baik Yang Dilakukan Melalui Media Elektronik (Menurut UU No. 11 Tahun 2008)

    Get PDF
    The purpose of this research is to determine how the offense of defamation and its criminal penalties are formulated with respect to electronic media under Law No. 11 of 2008, and how these provisions are applied in criminal case practice. Using a normative juridical research method, it can be concluded that: 1. The acts prohibited by this law in relation to electronic information are distributing, transmitting, or making accessible electronic information whose content includes, among other things, insult or defamation. 2. In applying the law to cases such as that of Prita Mulyasari, the element of intent, as an element of the offense, must be viewed more broadly and not judged merely in black-and-white terms under the ITE Law and the Criminal Code (KUHP); the assessment must be comprehensive rather than partial. For example, viewed from the perspective of the legal relationship between the complainant and the reported party, Prita's act can be regarded as a consumer complaint about hospital services that she considered unsatisfactory.

    Pottery Manufacture at a Neolithic Causewayed Enclosure near Hevringholm, East Jutland

    Get PDF
    Pottery Manufacture at a Neolithic Causewayed Enclosure near Hevringholm, East Jutland

    Automatically Scaling Multi-Tenant Machine Learning

    Get PDF
    Generally, the present disclosure is directed to optimizing use of computing resources in a system. In particular, in some implementations, the systems and methods of the present disclosure can include or otherwise leverage one or more machine-learned models to predict task allocation for a job serving a plurality of machine-learned models based on current system state and queries per second (QPS) data for the plurality of models. Alternatively, the tasks can be allocated according to one or more rules (e.g., a new task is allocated to a job until the compute usage for the job falls below a scaling threshold). Thus, the systems and methods of the present disclosure are able to efficiently serve a mix of high-QPS and low-QPS machine-learned models at low latency with minimal waste of compute resources (e.g., CPU, GPU, TPU, etc.) and memory (e.g., RAM)
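    As an illustration of the rule-based alternative mentioned above, the sketch below shows one possible interpretation in Python: tasks are added to a serving job until its compute usage drops below a scaling threshold. The Job class, the threshold value, the per-task capacity, and the example QPS figures are assumptions for illustration, not details from the disclosure.

```python
from dataclasses import dataclass

SCALING_THRESHOLD = 0.7     # assumed target fraction of total task capacity in use
TASK_CAPACITY_QPS = 1000.0  # assumed QPS a single task can serve at full utilization

@dataclass
class Job:
    """A serving job hosting several machine-learned models across num_tasks tasks."""
    model_qps: dict          # model name -> observed queries per second
    num_tasks: int = 1

    @property
    def utilization(self) -> float:
        """Fraction of the job's aggregate compute capacity currently in use."""
        return sum(self.model_qps.values()) / (self.num_tasks * TASK_CAPACITY_QPS)

def autoscale(job: Job, max_tasks: int = 64) -> int:
    """Allocate new tasks to the job until its compute usage falls below the threshold."""
    while job.utilization > SCALING_THRESHOLD and job.num_tasks < max_tasks:
        job.num_tasks += 1
    return job.num_tasks

# Example: one high-QPS model and two low-QPS models sharing a single job.
job = Job(model_qps={"ranker": 3500.0, "spam_filter": 40.0, "ocr": 15.0})
print(autoscale(job))  # number of tasks needed to keep utilization under the threshold
```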

    Elastic multi-resolution model-serving to compute inferences

    Get PDF
    Machine-learning models are consuming an increasing fraction of the world's computing resources. The cost of computing inferences with some machine-learning models is extremely high. Provisioning computing resources for peak performance, e.g., high availability and quality of service, entails creating headroom for traffic spikes (increases in demand) and preparing for the possibility of outages (decreases in capacity). Executing computer applications that utilize machine-learning models, also known as machine-learned models, can require significant capital and operational expenses. This disclosure describes techniques to optimize the use of computing resources for a machine-learning model. Multi-resolution models and/or models with recurrence are utilized. These models can compute inferences to varying degrees of quality (resolution). The multi-resolution models are served in an elastic manner such that a model of a resolution that fits the available computing resources is selected and utilized to compute inferences.
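    The sketch below illustrates, under stated assumptions, how elastic selection among multi-resolution variants might work: the highest-quality variant whose total inference cost fits the currently available compute is chosen. The variant names, quality scores, and cost figures are illustrative placeholders, not values from the disclosure.

```python
from typing import NamedTuple, Sequence

class ModelVariant(NamedTuple):
    name: str
    quality: float          # relative inference quality ("resolution")
    cost_per_query: float   # compute units consumed per inference

VARIANTS: Sequence[ModelVariant] = (
    ModelVariant("full", quality=1.00, cost_per_query=8.0),
    ModelVariant("medium", quality=0.92, cost_per_query=3.0),
    ModelVariant("small", quality=0.80, cost_per_query=1.0),
)

def select_variant(available_compute: float, expected_qps: float) -> ModelVariant:
    """Pick the highest-quality variant whose total inference cost fits the budget."""
    for variant in sorted(VARIANTS, key=lambda v: v.quality, reverse=True):
        if variant.cost_per_query * expected_qps <= available_compute:
            return variant
    return VARIANTS[-1]  # degrade gracefully to the cheapest variant if nothing fits

# With spare capacity the full-resolution model is chosen; during a traffic
# spike (higher QPS against the same capacity) a lower-resolution variant is.
print(select_variant(available_compute=1000.0, expected_qps=100.0).name)  # -> full
print(select_variant(available_compute=1000.0, expected_qps=250.0).name)  # -> medium
```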

    Cooling system early-stage design tool for naval applications

    Get PDF
    Thesis (Nav. E. and S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 65-66). This thesis utilizes concepts taken from the NAVSEA Design Practices and Criteria Manual for Surface Ship Freshwater Systems and other references to create a Cooling System Design Tool (CSDT). With the development of new radars and combat system equipment on warships comes the increased demand for the means to remove the heat generated by these power-hungry systems. Whereas in the past the relatively compact Chilled Water system could be tucked away where space was available, the higher demand for chilled water has resulted in potentially exponential growth in the size and weight of the components that make up this system; as a result, the design of the cooling systems must be considered earlier in the design process. The CSDT was developed to enable naval architects and engineers to better illustrate, early in the design process, the requirements and characteristics of the Chilled Water system components. Utilizing both Excel and Paramarine software, the CSDT rapidly creates a visual model of a Chilled Water system and conducts pump, damage, cost, weight, and volume analyses to assist in further development and design of the system. Several case studies were run to show the capability and flexibility of the tool, as well as how new electronic and mechanical systems can affect the parameters of the Chilled Water system. by Ethan R. Fiedel. Nav. E. and S.M.
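    To make concrete the kind of early-stage sizing calculation such a tool performs, the sketch below computes the chilled water flow required for an assumed electronics heat load using Q = m_dot * cp * dT. The equipment list, temperature rise, and load values are illustrative assumptions, not figures from the thesis.

```python
CP_WATER = 4.186   # kJ/(kg*K), specific heat of fresh water
DELTA_T = 6.0      # K, assumed chilled water temperature rise across the loads

heat_loads_kw = {  # notional electronic loads served by the chilled water plant
    "radar_array": 450.0,
    "combat_system_racks": 220.0,
    "power_conversion": 130.0,
}

total_load_kw = sum(heat_loads_kw.values())
# Q = m_dot * cp * dT  ->  m_dot = Q / (cp * dT)
mass_flow_kg_s = total_load_kw / (CP_WATER * DELTA_T)
volume_flow_m3_h = mass_flow_kg_s / 1000.0 * 3600.0  # fresh water density ~1000 kg/m^3

print(f"Total heat load: {total_load_kw:.0f} kW")
print(f"Required chilled water flow: {mass_flow_kg_s:.1f} kg/s ({volume_flow_m3_h:.1f} m^3/h)")
```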

    Identifying and analyzing the hiring process for the Department of Veterans Affairs, Veterans Health Administration

    Get PDF
    Thesis (S.M. in Engineering and Management)--Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, June 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 93-94). This thesis utilizes ideas taken from different Systems Engineering modeling tools to model the hiring process for the U.S. Department of Veterans Affairs (VA), Veterans Health Administration (VHA). This model is a guide for understanding the current state of the process and shows that inadequate Position Descriptions (PDs) are not the primary reason why the VA cannot meet the 80-day window set forth by the U.S. Office of Personnel Management (OPM). Additionally, the model can assist in identifying potential areas for reducing the overall process timeline and can be used as a training tool to illustrate how the hiring process progresses. Existing models show only the major steps in the process, which can mask sources of delay, communication issues, and confusion. The developed model delves deeper into those major steps, showing individual sub-steps, accountability, timelines, and data flows. Data for the model were obtained through direct observation, interviews, analysis of data collected by the VHA, and documents released by the VA and OPM. Once fully developed, the model allowed case studies to be conducted on three different positions within the VHA; these case studies illustrate that the inability to meet the hiring process timeline is only partially due to issues with the PD and that other factors (namely internal reviews and classification delays) have a significantly greater effect on the resulting timeline. The model itself and the recommendations provided, such as establishing priorities, targeting specific areas of time delay, improving communication, and generating and providing access to knowledge, can help the VHA achieve a streamlined and compressed timeline. by Ethan R. Fiedel. S.M. in Engineering and Management.

    Understanding HTML with Large Language Models

    Full text link
    Large language models (LLMs) have shown exceptional performance on a variety of natural language tasks. Yet, their capabilities for HTML understanding -- i.e., parsing the raw HTML of a webpage, with applications to automation of web-based tasks, crawling, and browser-assisted retrieval -- have not been fully explored. We contribute HTML understanding models (fine-tuned LLMs) and an in-depth analysis of their capabilities under three tasks: (i) Semantic Classification of HTML elements, (ii) Description Generation for HTML inputs, and (iii) Autonomous Web Navigation of HTML pages. While previous work has developed dedicated architectures and training procedures for HTML understanding, we show that LLMs pretrained on standard natural language corpora transfer remarkably well to HTML understanding tasks. For instance, fine-tuned LLMs are 12% more accurate at semantic classification compared to models trained exclusively on the task dataset. Moreover, when fine-tuned on data from the MiniWoB benchmark, LLMs successfully complete 50% more tasks using 192x less data compared to the previous best supervised model. Out of the LLMs we evaluate, we show evidence that T5-based models are ideal due to their bidirectional encoder-decoder architecture. To promote further research on LLMs for HTML understanding, we create and open-source a large-scale HTML dataset distilled and auto-labeled from CommonCrawl
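    A minimal sketch of the semantic classification task described above, using the Hugging Face Transformers library, is shown below. The checkpoint name is a placeholder: the paper's fine-tuned weights are assumed, and a stock T5 checkpoint will run this code but will not produce meaningful labels without fine-tuning.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Placeholder checkpoint; substitute a T5 model fine-tuned for HTML element classification.
CHECKPOINT = "t5-small"
tokenizer = T5Tokenizer.from_pretrained(CHECKPOINT)
model = T5ForConditionalGeneration.from_pretrained(CHECKPOINT)

# Raw HTML surrounding the element of interest, prefixed with an assumed task prompt.
html_snippet = '<label for="em">Email</label><input id="em" type="text" name="email">'
prompt = f"classify element: {html_snippet}"

inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)
outputs = model.generate(**inputs, max_new_tokens=8)
# A fine-tuned model would emit a category label such as "email" for this input.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```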