• Fundamentals of Artificial Intelligence & Machine Learning

    The amount of information available online about artificial intelligence and machine learning is vast. To help you find your way around, we have compiled useful resources on this page in a clear overview. In this way we aim to bring beginners and advanced learners up to date as efficiently as possible. The selected resources include online articles for a quick start, books for deeper knowledge, online courses for putting what you have learned into practice, and scientific articles that describe the state of the art in various fields.

  • Resources for Beginners

    The Business of Artificial Intelligence - Erik Brynjolfsson & Andrew McAfee

    Article

    Abstract For more than 250 years the fundamental drivers of economic growth have been technological innovations. The most important of these are what economists call general-purpose technologies — a category that includes the steam engine, electricity, and the internal combustion engine. Each one catalyzed waves of complementary innovations and opportunities. The internal combustion engine, for example, gave rise to cars, trucks, airplanes, chain saws, and lawnmowers, along with big-box retailers, shopping centers, cross-docking warehouses, new supply chains, and, when you think about it, suburbs. Companies as diverse as Walmart, UPS, and Uber found ways to leverage the technology to create profitable new business models.

    Artificial Intelligence for the Real World - Thomas H. Davenport & Rajeev Ronanki

    Article

    Abstract Cognitive technologies are increasingly being used to solve business problems; indeed, many executives believe that AI will substantially transform their companies within three years. But many of the most ambitious AI projects encounter setbacks or fail. A survey of 250 executives familiar with their companies' use of cognitive technology and a study of 152 projects show that companies do better by taking an incremental rather than a transformative approach to developing and implementing AI, and by focusing on augmenting rather than replacing human capabilities. Broadly speaking, AI can support three important business needs: automating business processes (typically back-office administrative and financial activities), gaining insight through data analysis, and engaging with customers and employees. To get the most out of AI, firms must understand which technologies perform what types of tasks, create a prioritized portfolio of projects based on business needs, and develop plans to scale up across the company.

    Prediction Machines: The Simple Economics of Artificial Intelligence - Ajay Agrawal, Joshua Gans & Avi Goldfarb

    Book

    Abstract Artificial intelligence does the seemingly impossible, magically bringing machines to life--driving cars, trading stocks, and teaching children. But facing the sea change that AI will bring can be paralyzing. How should companies set strategies, governments design policies, and people plan their lives for a world so different from what we know? In the face of such uncertainty, many analysts either cower in fear or predict an impossibly sunny future.

    But in Prediction Machines, three eminent economists recast the rise of AI as a drop in the cost of prediction. With this single, masterful stroke, they lift the curtain on the AI-is-magic hype and show how basic tools from economics provide clarity about the AI revolution and a basis for action by CEOs, managers, policy makers, investors, and entrepreneurs.

    Applied Artificial Intelligence: A Handbook for Business Leaders - Mariya Yao, Adelyn Zhou & Marlene Jia

    Book

    Abstract "Artificial intelligence" is the buzz word of the day. You've no doubt read your fair share of media hype either proclaiming doom and gloom where robots seize our jobs or prophesying a new utopia where AI cures all our human problems. But what does it actually mean for your role as a business leader? Applied Artificial Intelligence is a practical guide for business leaders who are passionate about leveraging machine intelligence to enhance the productivity of their organizations and the quality of life in their communities. If you want to drive innovation by combining data, technology, design, and people to solve real problems at an enterprise scale, this is your playbook. This book does not overload you with details on debugging TensorFlow code nor bore you with generalizations about the future of humanity. Instead, we teach you how to lead successful AI initiatives by prioritizing the right opportunities, building a diverse team of experts, conducting strategic experiments, and consciously designing your solutions to benefit both your organization and society as a whole. This book is focused on helping you drive concrete business decisions through applications of artificial intelligence and machine learning. Written with the combined knowledge of three experts in the field, Applied Artificial Intelligence is the best practical guide for business leaders looking to get true value from the adoption of machine learning technology.

    Hands-On Machine Learning with Scikit-Learn & TensorFlow - Aurélien Géron

    Book

    Abstract Through a series of recent breakthroughs, deep learning has boosted the entire field of machine learning. Now, even programmers who know close to nothing about this technology can use simple, efficient tools to implement programs capable of learning from data. This practical book shows you how.

    By using concrete examples, minimal theory, and two production-ready Python frameworks—scikit-learn and TensorFlow—author Aurélien Géron helps you gain an intuitive understanding of the concepts and tools for building intelligent systems. You’ll learn a range of techniques, starting with simple linear regression and progressing to deep neural networks. With exercises in each chapter to help you apply what you’ve learned, all you need is programming experience to get started.
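
    As a taste of the workflow the book starts with, here is a minimal linear-regression sketch using scikit-learn; the synthetic data and all parameter values are illustrative assumptions, not examples from the book.

      # Minimal scikit-learn linear regression on synthetic data (illustrative only).
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.RandomState(42)
      X = rng.rand(200, 1)                                  # one input feature
      y = 3.0 * X[:, 0] + 4.0 + 0.1 * rng.randn(200)        # assumed true relation plus noise

      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

      model = LinearRegression().fit(X_train, y_train)
      print("slope:", model.coef_[0], "intercept:", model.intercept_)
      print("R^2 on held-out data:", model.score(X_test, y_test))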

    Data Science mit Python - Jake VanderPlas

    Book

    Abstract For many, Python is the first choice for data science because a wide range of resources and libraries for storing, manipulating, and analyzing data is available. In this book the author explains how to use the most important tools.

    For data analysts and scientists, this comprehensive handbook is invaluable for any kind of computation with Python as well as for everyday tasks. These include manipulating, transforming, and cleaning data, visualizing different types of data, and using data to build statistics or machine-learning models.
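
    A minimal sketch of the kind of everyday data wrangling the book covers, using pandas; the table, column names, and values are made up for illustration.

      # Minimal pandas sketch: clean a missing value, then summarize by group (made-up data).
      import numpy as np
      import pandas as pd

      df = pd.DataFrame({
          "city": ["Berlin", "Hamburg", "Berlin", "Munich"],
          "sales": [120.0, np.nan, 95.0, 143.0],            # one missing value to clean
      })

      df["sales"] = df["sales"].fillna(df["sales"].mean())  # simple mean imputation
      print(df.groupby("city")["sales"].agg(["mean", "count"]))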

    Machine Learning - Andrew Ng

    Online course

    Abstract Machine learning is the science of getting computers to act without being explicitly programmed. In the past decade, machine learning has given us self-driving cars, practical speech recognition, effective web search, and a vastly improved understanding of the human genome. Machine learning is so pervasive today that you probably use it dozens of times a day without knowing it. Many researchers also think it is the best way to make progress towards human-level AI. In this class, you will learn about the most effective machine learning techniques, and gain practice implementing them and getting them to work for yourself. More importantly, you'll not only learn about the theoretical underpinnings of learning, but also gain the practical know-how needed to quickly and powerfully apply these techniques to new problems. Finally, you'll learn about some of Silicon Valley's best practices in innovation as it pertains to machine learning and AI. This course provides a broad introduction to machine learning, data mining, and statistical pattern recognition. Topics include: (i) Supervised learning (parametric/non-parametric algorithms, support vector machines, kernels, neural networks). (ii) Unsupervised learning (clustering, dimensionality reduction, recommender systems, deep learning). (iii) Best practices in machine learning (bias/variance theory; innovation process in machine learning and AI). The course will also draw from numerous case studies and applications, so that you'll also learn how to apply learning algorithms to building smart robots (perception, control), text understanding (web search, anti-spam), computer vision, medical informatics, audio, database mining, and other areas.
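
    To complement the regression sketch above, here is a minimal scikit-learn example of one of the listed unsupervised techniques (clustering); the toy data is an assumption for illustration and the code is not taken from the course.

      # Minimal k-means clustering sketch on two synthetic blobs (illustrative only).
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.RandomState(0)
      X = np.vstack([rng.randn(50, 2),                      # blob around (0, 0)
                     rng.randn(50, 2) + [5.0, 5.0]])        # blob around (5, 5)

      kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
      print("cluster centers:\n", kmeans.cluster_centers_)
      print("first 10 labels:", kmeans.labels_[:10])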

    Deep Learning for Coders, Part 1 - Jeremy Howard

    Online course

    Abstract You will learn the practical details of deep learning applications with hands-on model building using PyTorch and work on problems ranging from computer vision and natural language processing to recommendation systems.

    After finishing this course you will be able to:

    • apply transfer learning to image classification problems (see the sketch below)
    • use neural networks for recommendation algorithms
    • use recurrent neural networks and convolutional neural networks for text classification problems
    • apply neural networks to tabular data and learn embeddings for categorical variables
    • generate sentences that describe an image

    Previous programming experience in Python and some machine learning background are recommended to make the best use of the course.
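
    A minimal sketch of the transfer-learning idea from the first bullet above, written with torchvision rather than the fastai library taught in the course; the number of target classes and the random batch are placeholders.

      # Minimal transfer-learning sketch: reuse an ImageNet-pretrained backbone, retrain only the head.
      import torch
      import torch.nn as nn
      from torchvision import models

      num_classes = 10                                       # assumed placeholder for the target dataset
      model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # torchvision >= 0.13
      for param in model.parameters():
          param.requires_grad = False                        # freeze the pretrained features
      model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable classification head

      optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
      criterion = nn.CrossEntropyLoss()

      # One illustrative training step on a random batch (stand-in for a real DataLoader).
      images = torch.randn(8, 3, 224, 224)
      labels = torch.randint(0, num_classes, (8,))
      loss = criterion(model(images), labels)
      loss.backward()
      optimizer.step()
      print("loss:", loss.item())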

    Künstliche Intelligenz: Ein moderner Ansatz - Stuart Russell & Peter Norvig

    Book

    Abstract The third edition of this computer science classic has been completely revised from the ground up and adapted to the latest developments in AI. The authors succeed in presenting AI across its full range of topics in a way that students can understand and follow. They cover all relevant aspects of AI, from logic and probability theory through perception, reasoning, learning, and acting, to microelectronic devices and robots. Extended with modern search and language algorithms as well as learning with neural networks, this work sets a new standard that no other current work can match.

    Reinforcement Learning: An Introduction - Richard S. Sutton & Andrew G. Barto

    Book

    Abstract Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability.
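
    For orientation, two of the book's central formulas in the authors' notation: the discounted return the agent tries to maximize, and the one-step tabular Q-learning update.

      G_t = \sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1}, \qquad 0 \le \gamma \le 1

      Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \big[ R_{t+1} + \gamma \max_{a} Q(S_{t+1}, a) - Q(S_t, A_t) \big]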

  • Resources for Advanced Learners

    Pattern Recognition and Machine Learning - Christopher M. Bishop

    Book

    Abstract This is the first textbook on pattern recognition to present the Bayesian viewpoint. The book presents approximate inference algorithms that permit fast approximate answers in situations where exact answers are not feasible. It uses graphical models to describe probability distributions, something few other machine learning books do. No previous knowledge of pattern recognition or machine learning concepts is assumed. Familiarity with multivariate calculus and basic linear algebra is required, and some experience in the use of probabilities would be helpful, though not essential, as the book includes a self-contained introduction to basic probability theory.
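
    As a one-line reminder of the Bayesian viewpoint mentioned above (standard notation, not quoted from the book): for parameters w and observed data D, the posterior combines likelihood and prior,

      p(w \mid D) = \frac{p(D \mid w)\, p(w)}{p(D)}, \qquad p(D) = \int p(D \mid w)\, p(w)\, dw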

    Machine Learning: A Probabilistic Perspective - Kevin P. Murphy

    Book

    Abstract Today's Web-enabled deluge of electronic data calls for automated methods of data analysis. Machine learning provides these, developing methods that can automatically detect patterns in data and then use the uncovered patterns to predict future data. This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach.

    The coverage combines breadth and depth, offering necessary background material on such topics as probability, optimization, and linear algebra as well as discussion of recent developments in the field, including conditional random fields, L1 regularization, and deep learning. The book is written in an informal, accessible style, complete with pseudo-code for the most important algorithms. All topics are copiously illustrated with color images and worked examples drawn from such application domains as biology, text processing, computer vision, and robotics. Rather than providing a cookbook of different heuristic methods, the book stresses a principled model-based approach, often using the language of graphical models to specify models in a concise and intuitive way. Almost all the models described have been implemented in a MATLAB software package, PMTK (probabilistic modeling toolkit), which is freely available online. The book is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.

    Deep Learning - Ian Goodfellow, Yoshua Bengio & Aaron Courville

    Book

    Abstract Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning.

    The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models.

    Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.
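
    A minimal sketch of the "hierarchy of concepts" idea as a stack of layers, written with tf.keras; the layer sizes and the input dimension are assumptions for illustration and do not come from the book.

      # Minimal deep feedforward network: each layer builds on the representation below it.
      import tensorflow as tf

      model = tf.keras.Sequential([
          tf.keras.layers.Dense(256, activation="relu", input_shape=(784,)),  # low-level features
          tf.keras.layers.Dense(128, activation="relu"),                      # intermediate concepts
          tf.keras.layers.Dense(64, activation="relu"),                       # higher-level concepts
          tf.keras.layers.Dense(10, activation="softmax"),                    # class probabilities
      ])
      model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
      model.summary()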

    Deep Reinforcement Learning Hands-On - Maxim Lapan

    Book

    Abstract Recent developments in reinforcement learning (RL), combined with deep learning (DL), have seen unprecedented progress made towards training agents to solve complex problems in a human-like way. Google's use of algorithms to play and defeat the well-known Atari arcade games has propelled the field to prominence, and researchers are generating new ideas at a rapid pace.

    Deep Reinforcement Learning Hands-On is a comprehensive guide to the very latest DL tools and their limitations. You will evaluate methods including Cross-entropy and policy gradients, before applying them to real-world environments. Take on both the Atari set of virtual games and family favorites such as Connect4. The book provides an introduction to the basics of RL, giving you the know-how to code intelligent learning agents to take on a formidable array of practical tasks. Discover how to implement Q-learning on 'grid world' environments, teach your agent to buy and trade stocks, and find out how natural language models are driving the boom in chatbots.

    Some fluency in Python is assumed. Basic deep learning (DL) approaches should be familiar to readers and some practical experience in DL will be helpful. This book is an introduction to deep reinforcement learning (RL) and requires no background in RL.
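
    A minimal sketch of the tabular Q-learning mentioned above, on an assumed five-state corridor "grid world" with a reward of +1 at the right end; the environment and all hyperparameters are illustrative, not taken from the book.

      # Tiny tabular Q-learning sketch on an assumed 5-state corridor (goal at the right end).
      import numpy as np

      n_states, n_actions = 5, 2                    # actions: 0 = left, 1 = right
      alpha, gamma, epsilon = 0.1, 0.95, 0.1
      Q = np.zeros((n_states, n_actions))
      rng = np.random.default_rng(0)

      for episode in range(300):
          s = 0
          while s != n_states - 1:                  # episode ends at the goal state
              explore = rng.random() < epsilon or Q[s, 0] == Q[s, 1]   # explore or break ties randomly
              a = int(rng.integers(n_actions)) if explore else int(np.argmax(Q[s]))
              s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
              r = 1.0 if s_next == n_states - 1 else 0.0
              Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
              s = s_next

      print("greedy action per state:", np.argmax(Q, axis=1))  # states 0-3 learn 'right' (1); state 4 is terminal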

    Cutting Edge Deep Learning For Coders, Part 2 - Jeremy Howard

    Online course

    Abstract Welcome to the new 2018 edition of fast.ai's second 7 week course, Cutting Edge Deep Learning For Coders, Part 2, where you'll learn the latest developments in deep learning, how to read and implement new academic papers, and how to solve challenging end-to-end problems such as natural language translation. You'll develop a deep understanding of neural network foundations, the most important recent advances in the fields, and how to implement them in the world's fastest deep learning libraries, fastai and pytorch.

    We will be assuming familiarity with everything from part 1, such as: CNNs (including resnets), RNNs (including LSTM and GRU), SGD/Adam/etc, batch normalization, data augmentation, PyTorch, and numpy.

    Deep Residual Learning for Image Recognition - He et al.

    Scientific article

    Abstract Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.
    The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
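
    A minimal sketch of the residual reformulation described above, written in PyTorch: the block learns a residual function F(x) and adds the input back, so its output is F(x) + x. The layer sizes are illustrative and not the exact configuration from the paper.

      # Minimal residual block: learn F(x), then add the identity shortcut, output = F(x) + x.
      import torch
      import torch.nn as nn

      class ResidualBlock(nn.Module):
          def __init__(self, channels):
              super().__init__()
              self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
              self.bn1 = nn.BatchNorm2d(channels)
              self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
              self.bn2 = nn.BatchNorm2d(channels)
              self.relu = nn.ReLU(inplace=True)

          def forward(self, x):
              residual = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))  # F(x)
              return self.relu(residual + x)                                       # F(x) + x

      x = torch.randn(2, 64, 32, 32)                 # illustrative input: 64 channels, 32x32
      print(ResidualBlock(64)(x).shape)              # shape preserved: torch.Size([2, 64, 32, 32])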

    You Only Look Once: Unified, Real-Time Object Detection - Redmon et al.

    Scientific article

    Abstract We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance.
    Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is far less likely to predict false detections where nothing exists. Finally, YOLO learns very general representations of objects. It outperforms all other detection methods, including DPM and R-CNN, by a wide margin when generalizing from natural images to artwork on both the Picasso Dataset and the People-Art Dataset.
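
    A minimal sketch of the "detection as regression" idea: in the paper's base configuration the network's final output is an S x S x (B*5 + C) tensor with S=7, B=2, C=20 for PASCAL VOC, and each grid cell's slice is read as B boxes plus conditional class probabilities. The random tensor below merely stands in for a real network output.

      # Decode one grid cell of a YOLO-style output tensor (random values stand in for a real network).
      import numpy as np

      S, B, C = 7, 2, 20                             # grid size, boxes per cell, classes (paper's VOC setup)
      output = np.random.rand(S, S, B * 5 + C)       # stand-in for the network's regression output

      cell = output[3, 4]                            # one grid cell
      boxes = cell[:B * 5].reshape(B, 5)             # each box: x, y, w, h, confidence
      class_probs = cell[B * 5:]                     # conditional class probabilities for this cell

      best_box = boxes[np.argmax(boxes[:, 4])]       # box with the highest confidence
      best_class = int(np.argmax(class_probs))
      score = best_box[4] * class_probs[best_class]  # class-specific confidence score
      print("box (x, y, w, h):", best_box[:4], "class:", best_class, "score:", float(score))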

    Generative Adversarial Nets - Goodfellow et al.

    Scientific article

    Abstract We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.
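
    In the paper's notation, this minimax game is played over the value function

      \min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]

    and at the unique equilibrium G reproduces the data distribution while D(x) = 1/2 everywhere, which is the value mentioned in the abstract.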

    Mastering the Game of Go without human knowledge - Silver et al.

    Scientific article

    Abstract A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.
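
    Concretely, the network f_theta(s) outputs move probabilities p and a value v for a position s, and the paper trains it to match the tree-search probabilities pi and the game winner z by minimizing a loss that combines a value error, a policy cross-entropy, and weight regularization:

      l = (z - v)^2 - \pi^{\top} \log p + c \lVert \theta \rVert^2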

    TensorFlow: A system for large-scale machine learning - Abadi et al.

    Scientific article

    Abstract TensorFlow is a machine learning system that operates at large scale and in heterogeneous environments. TensorFlow uses dataflow graphs to represent computation, shared state, and the operations that mutate that state. It maps the nodes of a dataflow graph across many machines in a cluster, and within a machine across multiple computational devices, including multicore CPUs, general-purpose GPUs, and custom-designed ASICs known as Tensor Processing Units (TPUs). This architecture gives flexibility to the application developer: whereas in previous “parameter server” designs the management of shared state is built into the system, TensorFlow enables developers to experiment with novel optimizations and training algorithms. TensorFlow supports a variety of applications, with a focus on training and inference on deep neural networks. Several Google services use TensorFlow in production, we have released it as an open-source project, and it has become widely used for machine learning research. In this paper, we describe the TensorFlow dataflow model and demonstrate the compelling performance that TensorFlow achieves for several real-world applications.
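
    A minimal sketch of the dataflow model the paper describes, using the TensorFlow 1.x graph-and-session API documented there; current TensorFlow 2.x defaults to eager execution, so the compat.v1 layer is used here to build an explicit graph.

      # Minimal TF 1.x-style dataflow graph: define nodes first, then run them in a session.
      import tensorflow.compat.v1 as tf
      tf.disable_eager_execution()

      x = tf.placeholder(tf.float32, shape=(None, 3), name="x")   # graph node: input
      W = tf.Variable(tf.ones((3, 1)), name="W")                  # graph node: mutable shared state
      y = tf.matmul(x, W, name="y")                               # graph node: computation

      with tf.Session() as sess:
          sess.run(tf.global_variables_initializer())
          print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))    # -> [[6.]]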