898 research outputs found
The M&A process reviewed: the possible use of artificial intelligence and related technologies throughout the process
This paper explores the possible use of artificial intelligence throughout the M&A process.
New technologies enable new approaches, including in the M&A industry. Transactions are
becoming increasingly complex and artificial intelligence offers the potential to meet this
challenge. As great as the potential of artificial intelligence may be, the digital availability of
data, the very low innovation pressure of all market participants, and the transaction volume
are the factors limiting implementation. The established research questions were examined
through expert interviews and a complementary questionnaire.
Mapping AI Arguments in Journalism Studies
This study investigates and suggests typologies for examining Artificial
Intelligence (AI) within the domains of journalism and mass communication
research. We aim to elucidate seven distinct subfields of AI (machine learning; natural
language processing (NLP); speech recognition; expert systems; planning, scheduling, and
optimization; robotics; and computer vision) through concrete examples and practical
applications. The primary objective is to devise a structured framework that
can help AI researchers in the field of journalism. By comprehending the
operational principles of each subfield, scholars can enhance their ability to
focus on a specific facet when analyzing a particular research topic.
Robust Dialog Management Through A Context-centric Architecture
This dissertation presents and evaluates a method of managing spoken dialog interactions with robust attention to fulfilling the human user’s goals in the presence of speech recognition limitations. Assistive speech-based embodied conversation agents are computer-based entities that interact with humans to help accomplish a certain task or communicate information via spoken input and output. A challenging aspect of this task involves open dialog, where the user is free to converse in an unstructured manner. With this style of input, the machine’s ability to communicate may be hindered by poor reception of utterances, caused by a user’s inadequate command of a language and/or faults in the speech recognition facilities. Since speech-based input is emphasized, this endeavor involves the fundamental issues associated with natural language processing, automatic speech recognition, and dialog system design. Driven by Context-Based Reasoning, the presented dialog manager features a discourse model that implements mixed-initiative conversation with a focus on the user’s assistive needs. The discourse behavior must maintain a sense of generality, where the assistive nature of the system remains constant regardless of its knowledge corpus. The dialog manager was encapsulated into a speech-based embodied conversation agent platform for prototyping and testing purposes. A battery of user trials was performed on this agent to evaluate its performance as a robust, domain-independent, speech-based interaction entity capable of satisfying the needs of its users.
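The idea of context-driven discourse can be illustrated with a minimal sketch. This is a hypothetical toy, not the dissertation's implementation: active contexts sit on a stack, each declaring the keywords it can interpret, so an utterance that the most specific context cannot handle falls back to an enclosing context, and unrecognized input triggers a clarification request (the robustness behavior described above).

```python
# Toy context-centric dialog manager (illustrative assumption, not the
# dissertation's actual system). Contexts are stacked; matching proceeds
# from the most specific context downward.

class Context:
    def __init__(self, name, keywords, reply):
        self.name = name
        self.keywords = set(keywords)   # words this context can interpret
        self.reply = reply              # canned assistive response

class DialogManager:
    def __init__(self, root):
        self.stack = [root]             # root context is always active

    def push(self, context):
        self.stack.append(context)

    def handle(self, utterance):
        words = set(utterance.lower().split())
        # Search from the top (most specific) context downward.
        for context in reversed(self.stack):
            if words & context.keywords:
                return context.reply
        # Nothing matched: ask for a rephrase (robustness to recognition errors).
        return "Sorry, I did not catch that. Could you rephrase?"

root = Context("help", {"help", "hello"}, "How can I help you?")
dm = DialogManager(root)
dm.push(Context("schedule", {"appointment", "meeting"}, "When would you like to meet?"))
```

A real system would of course attach recognition confidence scores and richer semantics to each context; the stack-based fallback is the point being illustrated.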
The future of venture capital decision making : the impact of quantitative sourcing and machine learning on the VC Investment process
Investing in early-stage startups is a difficult endeavor. Venture Capitalists use heuristics and
base their decisions on past experiences, which can lead to biases. Venture Capitalists are
increasingly using artificial intelligence and quantitative sourcing to support their
investment process, while others still rely on traditional investment mechanisms. This research
investigates the usage and impact of artificial intelligence and machine learning throughout the
venture investment cycle to make investment decisions. This dissertation is an exploratory
study that employs a qualitative research approach in the form of semi-structured interviews
with ten European Venture Capitalists. The results show that Venture Capitalists utilize
machine learning and web scraper tools, particularly during the deal origination, firm-specific
screening, and general screening stages of the investment process, to solve the identification
and selection challenges. As a result, investment processes become more efficient and less
biased, allowing more time to be spent advising and mentoring portfolio startups. This study
adds to the existing literature on how artificial intelligence and data can augment existing
investment mechanisms during the venture capital decision-making process.
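The screening stages mentioned above can be pictured with a small sketch. Everything here is a hypothetical assumption (the fields, weights, and threshold are not taken from the interviewed funds): scraped startup records are scored against an investor's criteria and shortlisted for deal origination.

```python
# Hypothetical sketch of firm-specific screening over records a web-scraper
# tool might return. Criteria, weights, and threshold are illustrative.

CRITERIA = {
    "sector": ({"fintech", "healthtech"}, 2.0),   # preferred sectors
    "stage":  ({"seed", "series a"}, 1.5),        # preferred funding stages
}

def score(startup):
    """Weighted count of criteria the scraped record satisfies."""
    total = 0.0
    for field, (accepted, weight) in CRITERIA.items():
        if startup.get(field, "").lower() in accepted:
            total += weight
    return total

def shortlist(startups, threshold=2.0):
    """Deal-origination filter: keep and rank startups above the threshold."""
    kept = [s for s in startups if score(s) >= threshold]
    return sorted(kept, key=score, reverse=True)

deals = [
    {"name": "PayFlow", "sector": "fintech", "stage": "seed"},
    {"name": "AgriBot", "sector": "agritech", "stage": "seed"},
]
```

A production pipeline would replace the hand-set weights with a trained model, but the filter-then-rank shape is what frees investor time for advising portfolio companies.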
Mastering Overdetection and Underdetection in Learner-Answer Processing: Simple Techniques for Analysis and Diagnosis.
This paper presents a "didactic triangulation" strategy to cope with the problem of reliability of NLP applications for Computer Assisted Language Learning (CALL) systems. It is based on the implementation of basic but well-mastered NLP techniques, and puts the emphasis on an adapted gearing between computable linguistic clues and the didactic features of the evaluated activities. We claim that a correct balance between noise (i.e. false error detection) and silence (i.e. undetected errors) is not only an outcome of NLP techniques, but of an appropriate didactic integration of what NLP can do well and what it cannot do. Based on this approach, ExoGen is a prototype for generating activities such as gap-fill exercises. It integrates a module for error detection and description, which checks learners' answers against expected ones. Through the analysis of graphic, orthographic, and morphosyntactic differences, it is able to diagnose problems like spelling errors, lexical mix-ups, agreement errors, conjugation errors, etc. The first evaluation of ExoGen outputs, based on the FRIDA learner corpus, has yielded very promising results, paving the way for the development of an efficient and general model adapted to a wide variety of activities.
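The answer-checking step can be sketched in a few lines. This is an assumed illustration, not ExoGen's actual module: a learner's answer is compared against the expected one, and a high character-level similarity is read as a probable spelling error while a low one suggests a different word entirely (a lexical mix-up). The 0.8 cutoff is an arbitrary assumption.

```python
# Minimal sketch (assumed, not ExoGen's implementation) of diagnosing a
# learner's answer against the expected one via character similarity.
from difflib import SequenceMatcher

def diagnose(answer, expected):
    if answer == expected:
        return "correct"
    # A ratio near 1.0 means only a few characters differ: spelling error.
    similarity = SequenceMatcher(None, answer.lower(), expected.lower()).ratio()
    if similarity >= 0.8:
        return "spelling error"
    # Low similarity: the learner probably chose another word entirely.
    return "lexical mix-up"
```

Distinguishing agreement and conjugation errors would additionally require morphosyntactic analysis, which is exactly the "adapted gearing" between linguistic clues and didactic features the paper argues for.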
A Theory Explains Deep Learning
This is our journal for developing Deduction Theory and studying deep learning and artificial intelligence. Deduction Theory is a theory of deducing the world's relativity through information coupling and asymmetry. We focus on information processing, and see intelligence as an information structure that is relatively close to object-oriented, probability-oriented, unsupervised, relativistic, and massively automated information processing. We see deep learning and machine learning as attempts to bring all types of information processing relatively close to probabilistic information processing. We discuss how to understand deep learning and artificial intelligence, and why deep learning shows better performance than other methods, through metaphysical logic.
Language Processing and the Artificial Mind: Teaching Code Literacy in the Humanities
Humanities majors often find themselves in jobs where they either manage programmers or work with them in close collaboration. These interactions often pose difficulties because specialists in literature, history, philosophy, and so on are not usually code literate. They do not understand what tasks computers are best suited to, or how programmers solve problems. Learning code literacy would be a great benefit to humanities majors, but the traditional computer science curriculum is heavily math-oriented, and students outside of science and technology majors are often math-averse. Yet they are often interested in language, linguistics, and science fiction. This thesis is a case study exploring whether computational linguistics and artificial intelligence provide a suitable setting for teaching basic code literacy. I researched, designed, and taught a course called “Language Processing and the Artificial Mind.” Instead of math, it focuses on language processing, artificial intelligence, and the formidable challenges that programmers face when trying to create machines that understand natural language. This thesis is a detailed description of the material, how the material was chosen, and the outcome for student learning. Student performance on exams indicates that students learned code literacy basics and important linguistics issues in natural language processing. An exit survey indicates that students found the course to be valuable, though a minority reacted negatively to the material on programming. Future studies should explore teaching code literacy with less programming and new ways to make coding more interesting to the target audience.
Customer interactions with AI: How can Marley Spoon optimize its chatbot performance to improve the touchpoint experience along the customer journey?
The purpose of this in-company project is to identify chatbot optimization recommendations
for Marley Spoon to improve the touchpoint experience along the customer journey. Customer
interactions with Artificial Intelligence have become a relevant part of communication channels
within business processes and are already applied in many marketing strategies. Thanks to their
machine learning capability, chatbots can combine natural language processing and natural
language understanding to offer an automated customer experience.
Nowadays, AI chatbots are not only able to operate on a mechanical and thinking level, but are
also developing on a feeling level. Hence, chatbots can understand human emotions and adapt
empathetically to different moods and circumstances. In this way, a well-implemented chatbot
should not only be used as a simple FAQ machine, but also be deployed for different marketing
purposes such as customer attraction and retention.
The results of this research are based on a thorough literature review of recent articles by
well-respected researchers in this field. Moreover, primary research was conducted in the form
of in-depth interviews with different specialists at the company and a customer satisfaction
survey collected through the chatbot platform. Deriving from the findings of this research,
three recommendations are provided to the company, which should be implemented to improve the
touchpoint experience: a new chatbot interface with more customer engagement, integration of
the chatbot into different customer journey stages, and the setup of a chatbot superteam with
a specified scope and responsibilities.
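The "feeling level" described above can be made concrete with a toy sketch. This is an illustrative assumption, not Marley Spoon's system: a tiny sentiment lexicon estimates the customer's mood, and the bot picks an empathetic rather than neutral reply template accordingly.

```python
# Toy mood-adaptive reply selection (illustrative; lexicon and templates
# are invented, not Marley Spoon's actual chatbot).
NEGATIVE = {"angry", "late", "missing", "bad", "disappointed", "broken"}
POSITIVE = {"great", "love", "thanks", "happy", "delicious"}

def detect_mood(message):
    """Estimate mood from lexicon hits in the message."""
    words = set(message.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "negative" if score < 0 else "positive" if score > 0 else "neutral"

def reply(message):
    """Choose a reply template matching the detected mood."""
    mood = detect_mood(message)
    if mood == "negative":
        return "I am sorry to hear that. Let me fix this right away."
    if mood == "positive":
        return "Great to hear! Anything else I can help with?"
    return "How can I help you today?"
```

A production chatbot would use a trained sentiment model rather than a word list, but the adapt-tone-to-mood loop is the behavior the abstract describes.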
A web-based AI assistant Application using Python and JavaScript
Our research is mainly based on a chatbot powered by Artificial Intelligence. Nowadays, AI assistants such as Apple’s Siri, Google Now, and Amazon’s Alexa are fast-growing and widely integrated with many smart devices. These assistants are built with the primary purpose of serving as personal assistants for individual users in certain contexts. In this research, we highlight the development process of chatbots, their features, problems, case studies, and limitations.
This research delivers information that helps developers build answer bots and integrate chatbots with business accounts. The aim is to assist users and enable transactions between client companies and their customers. As a result, users can get answers to their queries and client companies can grow their business.
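The answer-bot core described above can be sketched as follows. This is a hedged illustration, not the paper's implementation: the Python back end matches a user query to an intent by keyword overlap, and a JavaScript front end (not shown) would send queries to it. The intents and answers are invented examples.

```python
# Minimal answer-bot core (hypothetical example intents). A web front end
# would pass user queries to answer() and display the returned string.
INTENTS = {
    "hours":    ({"open", "hours", "closing"},
                 "We are open 9am-6pm, Monday to Friday."),
    "shipping": ({"ship", "shipping", "delivery"},
                 "Orders ship within 2 business days."),
}

def answer(query):
    """Return the answer whose intent keywords best overlap the query."""
    words = set(query.lower().split())
    best_intent, best_overlap = None, 0
    for intent, (keywords, _) in INTENTS.items():
        overlap = len(words & keywords)
        if overlap > best_overlap:
            best_intent, best_overlap = intent, overlap
    if best_intent is None:
        # Hand off to a human when no intent matches.
        return "Sorry, I do not know that yet. A human agent will follow up."
    return INTENTS[best_intent][1]
```

Keyword overlap stands in here for the NLP/intent-classification step that commercial assistant platforms perform with trained models; the fallback-to-human branch is what keeps such a bot usable at its limits.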