What is Artificial Intelligence?

      1. What is Artificial Intelligence?
      2. AI: definition and history
      3. Strong AI vs. weak AI
      4. Artificial intelligence and pattern recognition
      5. Artificial Intelligence and Language
      6. Differences and similarities: AI, machine learning, deep learning
      7. Limits and possibilities: what can artificial intelligence (not) do?
      8. Ethics and AI: what is artificial intelligence allowed (not) to do?
      9. Programming artificial intelligence

 

 

Artificial intelligence is ubiquitous. Whether in voice assistants, chatbots, semantic text analysis, streaming services, smart factories or autonomous vehicles - AI will change how we shape our professional and private everyday lives, how we do business and how we live together as a society. Politicians, too, declare AI to be a fundamental condition of our future prosperity.

And although more and more people are using artificial intelligence, only a few know what exactly it is. This is not surprising: defining artificial intelligence precisely is a difficult undertaking. Just as human intelligence cannot be described unambiguously - a distinction is made, for example, between cognitive, emotional and social intelligence - there is no generally accepted definition of artificial intelligence that is used consistently by all actors. Rather, it is an umbrella term for all research areas that deal with how machines can match the performance of human intelligence. The following delimitation therefore attempts to create some clarity and transparency.

 

AI: definition and history

Historically, the term goes back to the American computer scientist John McCarthy, who in 1956 invited researchers from various disciplines to a workshop entitled “Dartmouth Summer Research Project on Artificial Intelligence”. The guiding idea of the meeting was: "The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." With this, the foundation stone was laid in 1956 for what later became the field of artificial intelligence.

AI: simulation and automation of cognitive skills

Today, numerous lexicon entries define artificial intelligence as a branch of computer science that deals with the machine imitation of human intelligence. The English Oxford Living Dictionary describes AI as follows: "The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages."

Meanwhile, AI experts in research and practice have agreed on a similarly abstract working definition: artificial intelligence is the automation and/or simulation of cognitive abilities - including visual perception, speech recognition and generation, reasoning, decision-making and acting - as well as, in general, the ability to adapt to changing environments.

The performance of these simulated and/or automated cognitive abilities varies greatly. While they are still very rudimentary in voice assistants such as Alexa and Siri, they already far exceed human capabilities in some areas - for example in medicine, where systems evaluate millions of MRI scans.

 

Strong AI vs. weak AI

At a very abstract level, the development directions of artificial intelligence can be divided into two categories: weak and strong AI. Weak AI (also: narrow AI) encompasses the majority of all development activities and enables the efficient simulation of specific individual human abilities. Strong AI, which would possess the same or even greater intellectual abilities than humans, is currently still very far from reality.

Strong AI

A strong AI would not only be able to act in a purely reactive manner, but would also be creative, flexible, capable of making decisions even under uncertainty and motivated on its own initiative - and therefore able to act proactively and in a planned manner. According to experts, however, such an AI does not currently exist, nor is its existence foreseeable.

 

In science and philosophy, it is highly controversial whether and when a strong AI can be developed at all. One of the biggest points of contention is the question of whether an AI will ever have empathy, self-reflection and awareness - properties that (to date) belong to the core of being human. Statements that announce or promise the existence of such a strong or general AI (also: AGI, Artificial General Intelligence) should therefore be viewed with skepticism. Excessive expectations of AI, which are often tagged with the terms superintelligence or singularity and fuel exaggerated fears of robot rule, only lead to a populist debate; they are anything but conducive to a transparent discourse.

Weak AI

Weak AI, on the other hand, focuses on solving individual application problems, with the systems developed being capable of self-optimization and learning. To this end, specific aspects of human intelligence are simulated and automated. Most of the commercial AI applications in existence today are weak AI systems. They are currently used in the following fields of application:

  • Digital language and text processing (Natural Language Processing): AI systems that fully or partially automatically understand or generate the content and context of texts and speech. In this way, for example, football or election reports can be written automatically, texts can be translated, and chatbots or voice assistants can communicate.
  • Robotics & autonomous machines: Smart and autonomously navigating (transport) machines such as drones, cars and rail vehicles that can adapt themselves to new environmental situations and learn in real time.
  • Pattern recognition in large data sets: control and optimization of infrastructures (e.g. in the flow of road traffic or in the power grid); identification of fraud, money laundering or terrorist financing in the financial industry; predictive policing in the fight against crime; AI-based diagnostic systems in the health sector (e.g. evaluation of radiological image data).

 

Artificial intelligence and pattern recognition

In the broad field of application of artificial intelligence, pattern recognition plays a special role - on the one hand because numerous current advances in AI can be traced back to advances in pattern recognition, and on the other hand because different fields of application (e.g. image, text and speech recognition) each rely on pattern recognition at least in part.

Pattern recognition is about extracting meaningful and relevant information from large, unstructured amounts of data by automatically detecting regularities, repetitions or similarities. The basis is the ability to classify: features must be identified that are identical within a feature category but do not appear outside of this category. In this way, faces can be recognized in digital photos, songs can be identified or traffic signs can be filtered from a flood of image data. The systematic recognition of patterns is also of great relevance in speech and text recognition.
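
As a minimal sketch of this classification principle (assuming the freely available scikit-learn library; the small handwritten-digit dataset ships with it), the following Python example trains a nearest-neighbor classifier: the digit images are the "flood of image data", and the learned feature categories are the digits 0-9:

```python
# Minimal pattern-recognition sketch (assumption: scikit-learn is installed).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# 8x8 grayscale images of handwritten digits, flattened to 64 features each.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# Classify a new image by the labels of its most similar known examples.
clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X_train, y_train)

print("recognized digit:", clf.predict(X_test[:1])[0])
print("accuracy on unseen images:", clf.score(X_test, y_test))
```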

 

Artificial Intelligence and Language

One of the most challenging and at the same time most exciting areas of application of artificial intelligence is the machine processing of natural language - better known as Natural Language Processing (NLP). As an interdisciplinary cross-sectional discipline between linguistics and artificial intelligence, its goal is to develop algorithms that break down and machine-process elements of human language. That means: everything that people express in writing or speech, NLP can translate into digitally readable information. This process also works in the opposite direction: data can likewise be converted into speech or text. These two process directions mark the two sub-disciplines into which NLP can be divided: Natural Language Understanding (NLU) and Natural Language Generation (NLG, also: automatic language and text generation).

Natural Language Generation

While the translation of natural language or text into data is a typical form of Natural Language Understanding, the opposite direction is called Natural Language Generation. With NLU, natural text is usually processed into data; with NLG, natural text is created from data. In all those areas where structured data is generated - for example in e-commerce, in the financial world or in reporting on sports, weather or elections - NLG programs can create reader-friendly texts from data in a matter of seconds. In this way, NLG systems free copywriters and editors from monotonous routine work, and the time saved can be invested in creative or conceptual work.
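
As a toy sketch of this data-to-text direction (the record fields and the wording of the template are invented for illustration), a few lines of Python suffice to turn a structured football result into a reader-friendly sentence:

```python
# Toy data-to-text (NLG) sketch; the record fields and wording are invented.
match = {
    "home": "FC Example", "away": "SC Sample",
    "home_goals": 3, "away_goals": 1, "attendance": 41500,
}

def report(m: dict) -> str:
    """Render one structured match record as a short, readable report."""
    if m["home_goals"] > m["away_goals"]:
        outcome = f'{m["home"]} beat {m["away"]}'
    elif m["home_goals"] < m["away_goals"]:
        outcome = f'{m["away"]} won away against {m["home"]}'
    else:
        outcome = f'{m["home"]} and {m["away"]} drew'
    return (f'{outcome} {m["home_goals"]}-{m["away_goals"]} '
            f'in front of {m["attendance"]:,} spectators.')

print(report(match))
```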

Natural Language Understanding

Natural Language Understanding, on the other hand, aims to "understand" natural-language text and to generate structured data from it. The umbrella term NLU covers a wide variety of computer applications, ranging from small, relatively simple tasks such as short commands to robots, to highly complex tasks such as fully understanding newspaper articles.
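
A correspondingly simple text-to-data sketch for the "short commands to robots" end of that spectrum (the mini command grammar and field names are invented for illustration) could look like this:

```python
# Toy text-to-data (NLU) sketch; the mini command grammar is invented.
import re
from typing import Optional

COMMAND = re.compile(
    r"(?P<action>move|turn)\s+(?P<direction>left|right|forward)"
    r"(?:\s+(?P<amount>\d+))?",
    re.IGNORECASE,
)

def understand(utterance: str) -> Optional[dict]:
    """Map a short natural-language robot command to a structured record."""
    m = COMMAND.search(utterance)
    if m is None:
        return None  # the utterance falls outside this tiny grammar
    return {
        "action": m["action"].lower(),
        "direction": m["direction"].lower(),
        "amount": int(m["amount"]) if m["amount"] else 1,
    }

print(understand("Please move forward 3"))  # {'action': 'move', 'direction': 'forward', 'amount': 3}
print(understand("turn left"))              # amount defaults to 1
```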

 

Differences and similarities: AI, machine learning, deep learning

The terms machine learning and deep learning are closely related to the term artificial intelligence and are often used synonymously in public discussion. A brief classification of the terms below is intended to make the use of the different terminologies more transparent.

While artificial intelligence serves as an umbrella term for all research and development areas that - as shown above - deal with the simulation and automation of cognitive skills, machine learning and deep learning can be understood as sub-areas of AI. Machine learning in particular is often treated as congruent with AI, but it is in fact a sub-area of it. At the same time, the vast majority of current advances in AI applications relate to machine learning. It therefore seems all the more helpful to first take a closer look at the term machine learning.

Machine learning

Machine learning (ML) describes a specific category of algorithms that use statistics to find patterns in large amounts of data, known as big data. These algorithms then use the patterns found in the historical (and, at best, representative) data to make predictions about certain events - such as which series a user might like on Netflix or what exactly is meant by a specific voice input to Alexa. With machine learning, algorithms are thus able to learn patterns from large data sets and independently find the solution to a specific problem, without every individual case having been explicitly programmed beforehand. With the help of machine learning, systems are therefore able to generate knowledge from experience. In this sense, the US computer scientist and AI pioneer Arthur L. Samuel described the term as early as 1959 as a system that has the "ability to learn without having been explicitly programmed".

Extract relevant data from big data and make predictions

In practice this means the following: in streaming services, algorithms learn - without the existing series genres having been programmed in beforehand in any way - that there are certain types of series that are watched by a certain class of users. In contrast to rule-based systems, with machine learning it is therefore not necessary to implement specific if-then rules for each newly occurring individual case and then apply them to a data set (e.g. for classification purposes). Rather, machine learning uses the existing data set to independently extract and summarize relevant information and thereby make predictions.
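
As a minimal illustration of this contrast (assuming scikit-learn; the tiny viewing-history dataset and its labels are invented), a decision tree can learn from past behavior which users tend to like a new series, without a single hand-written if-then rule:

```python
# Learned-from-data instead of hand-written rules (assumption: scikit-learn
# installed; the tiny viewing-history dataset below is invented).
from sklearn.tree import DecisionTreeClassifier

# One row per user: [hours of drama watched, hours of sci-fi, hours of comedy]
viewing_history = [
    [9, 0, 1], [8, 1, 0], [7, 2, 2],   # mostly drama viewers
    [0, 9, 1], [1, 8, 2], [2, 7, 0],   # mostly sci-fi viewers
]
# Did each user finish the new space-opera series? (1 = yes, 0 = no)
liked_new_series = [0, 0, 0, 1, 1, 1]

model = DecisionTreeClassifier(random_state=0)
model.fit(viewing_history, liked_new_series)

# Predict for a new user with a strong sci-fi habit: no if-then rule was
# written - the split thresholds were extracted from the data.
print(model.predict([[1, 9, 0]]))  # -> [1]
```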

Machine learning can therefore be used to optimize or (partially) automate processes that would otherwise have to be carried out manually, such as text or image recognition. It is also the process that powers many of the services we use today: recommendation systems like Netflix, YouTube and Spotify, search engines like Google and Baidu, social media feeds like Facebook and Twitter, and voice assistants like Siri and Alexa. In all of these cases, each platform collects as much data as possible about its users - which genres they like, which links they click on, which songs they prefer to listen to - and uses machine learning to estimate as accurately as possible what users most want to see or hear.

Deep learning

Deep learning is a sub-term of machine learning and is therefore also to be understood as a sub-area of artificial intelligence. While ML in general relies on self-adaptive algorithms that improve through experience or historical data, deep learning strengthens this process significantly and can to a large extent train itself. The technique used to do this is called a neural network: a kind of mathematical model whose structure is based on the functioning of the human brain.

Neural networks and black boxes

Neural networks contain numerous layers of computing nodes (similar to human neurons) that work together in an orchestrated manner to search through data and produce an end result. Since the contents of these layers become increasingly abstract and less comprehensible, they are also referred to as hidden layers. Through the interaction of several of these layers, "new" information can be formed between the layers, which represents a kind of abstract representation of the original information or input signals. Even developers are therefore not able, or only to a limited extent, to understand what the networks learn in the process or how they arrived at a certain result. This is referred to as the black-box character of AI systems. Finally, a distinction is made between three variants of learning in machine and deep learning: supervised, unsupervised and reinforcement learning.
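
As a bare-bones sketch of this layered structure (plain NumPy, with randomly initialized weights standing in for weights that would in practice be learned from data), the following forward pass shows how an input signal is transformed through two hidden layers into increasingly abstract intermediate representations:

```python
# Forward pass through a tiny feedforward network (assumption: NumPy installed;
# the weights are random here - in practice they would be learned from data).
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

x = rng.normal(size=4)                           # input signal, e.g. 4 pixel features

W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)    # input  -> hidden layer 1
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)    # hidden -> hidden layer 2
W3, b3 = rng.normal(size=(8, 3)), np.zeros(3)    # hidden -> 3 output classes

h1 = relu(x @ W1 + b1)   # first abstract representation of the input
h2 = relu(h1 @ W2 + b2)  # even more abstract "new" information
scores = h2 @ W3 + b3    # end result: one score per class

print("hidden layer 1:", h1.round(2))
print("output scores :", scores.round(2))
```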

Supervised learning

In supervised learning, the data to be analyzed are pre-classified (labeled) in order to tell the ML system what patterns to look for. The automatic classification of images is learned according to this principle: first, images are manually labeled with regard to certain variables (e.g. whether the facial expression is sad, happy or neutral); after thousands of such examples have been created, an algorithm can then categorize new image data automatically.
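
Sketched in the same scikit-learn style as above (the two "image features" and their labels are invented stand-ins for real, manually labeled facial-expression data), supervised learning boils down to fitting on labeled examples and then predicting:

```python
# Supervised learning sketch (assumption: scikit-learn installed; the features
# below are invented stand-ins for real, manually labeled image data).
from sklearn.linear_model import LogisticRegression

# Two toy features per face: [mouth curvature, eyebrow angle]
faces = [[0.9, 0.1], [0.8, 0.2], [-0.7, -0.6], [-0.9, -0.4], [0.0, 0.0]]
# Manually assigned labels tell the system what pattern to look for.
labels = ["happy", "happy", "sad", "sad", "neutral"]

model = LogisticRegression().fit(faces, labels)
print(model.predict([[0.85, 0.15]]))  # -> ['happy']
```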

Unsupervised learning

With unsupervised learning, the data to be analyzed do not carry any previously assigned labels. The algorithm therefore does not have to be provided with exact target specifications in an upstream training phase; rather, the ML system itself looks for whatever patterns it can find. Unsupervised learning methods are therefore preferred for exploring large data sets. In practice, however, unsupervised technologies are currently rather uncommon (with the exception of cybersecurity).
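
A minimal sketch (again assuming scikit-learn; the points are invented): k-means clustering receives no labels at all and still discovers the groupings in the data on its own:

```python
# Unsupervised learning sketch (assumption: scikit-learn installed).
from sklearn.cluster import KMeans

# Unlabeled 2-D points - no target values are provided anywhere.
points = [[1.0, 1.1], [0.9, 1.0], [1.2, 0.8],
          [8.0, 8.2], [7.9, 7.8], [8.1, 8.0]]

# The algorithm looks for structure itself: here, two clusters.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.labels_)           # e.g. [0 0 0 1 1 1]
print(kmeans.cluster_centers_)  # the discovered group centers
```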

Reinforcement learning

The method in which an algorithm learns through reward and punishment is called reinforcement learning. A reinforcement algorithm learns through pure trial and error whether a goal is achieved (reward) or not (punishment). Reinforcement learning is used, for example, when training chess programs: in (simulated) games against other chess programs, a system can learn very quickly whether a certain behavior led to the desired goal, namely victory (reward), or not (punishment). Reinforcement learning was also the training foundation of Google's AlphaGo, the program that defeated the best human players in the complex game of Go.
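
A tiny tabular Q-learning sketch (pure Python; the five-field corridor "game" and its reward values are invented for illustration) shows the trial-and-error loop: actions that eventually reach the goal are reinforced, wrong moves are penalized:

```python
# Tabular Q-learning on an invented 5-field corridor: the agent starts on
# field 0, the reward sits on field 4, and every step costs a small penalty.
import random

N_STATES, ACTIONS = 5, (-1, +1)           # actions: move left or move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2     # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Trial and error: mostly exploit what was learned, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 10.0 if s_next == N_STATES - 1 else -1.0  # reward / punishment
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned policy: on every field the best action is +1 ("move right").
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)])
```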

 

Limits and possibilities: what can artificial intelligence (not) do?

Not only in media discourse but also in expert circles, there are sometimes quite different definitions of artificial intelligence. Unclear ideas about what AI is and is not, and what it can and cannot do, contribute more to uncertainty than to acceptance in society. They lead to an often polarized debate driven by unrealistic ideas. Education about the limits and possibilities of artificial intelligence is therefore of the greatest relevance; only then can the impact of AI on society, the economy, culture and science be realistically assessed.

Overall, high hopes are associated with the use of artificial intelligence: AI-based medical (cancer) diagnostics, for example, promise major advances in the health sector, and in road traffic a reduction in accidents and traffic jams could lead to fewer road deaths on the one hand and less environmental pollution on the other. The way we work also seems to be facing disruptive changes: AI could relieve workers of dangerous and monotonous work.

On the other hand, technology-critical skeptics warn against the use of AI with dystopian forecasts of a supposed takeover of power by a superintelligence or singularity. Even Stephen Hawking and tech visionary Elon Musk have warned of the threat posed by AI. It should be noted, however, that such fears relate far less to the (weak) AI systems that have existed to date.

A balanced debate in which the (possible) advantages and disadvantages of the development and implementation of AI can be discussed transparently and in an informed manner is essential. Ultimately, the goal of all AI systems should be to create social, cultural and economic added value and thus contribute to the well-being of people. Artificial intelligence should support people intelligently in everyday life and at work where it makes sense, and should take on unpleasant or dangerous work without making people superfluous. The result: more time and resources to devote to creative or emotionally and socially valuable tasks that make people happy and that create meaning and added value in society, business and culture.

Ethics and AI: what is artificial intelligence allowed (not) to do?

Artificial intelligence is increasingly permeating our everyday life, and with it comes the question of ethical and social responsibility. Already today, algorithms decide which news a reader gets to see and which products are displayed to a consumer. For most people, it is neither apparent nor comprehensible on what basis and through which technological mechanisms these decisions are made. It also becomes problematic when an AI system makes incorrect or even discriminatory decisions, or when the possibilities of AI for control, surveillance and interference with the private sphere of citizens are abused, for example by governments. The general and very prominent question therefore arises as to what artificial intelligence should and should not be allowed to do.

The European Commission was the first international institution to address these issues and to set down criteria for the development and application of trustworthy AI on the basis of the EU Charter of Fundamental Rights and under the title “Ethics Guidelines for Trustworthy AI”. Four subject areas were defined as decisive for a “Trustworthy AI”: Fairness, transparency & traceability, responsibility and value orientation.

Multilateral cooperation between business, politics and society is essential in order to guarantee sustainable technological progress based on standards geared to the common good. After all, artificial intelligence holds the greatest potential of our time - for economic growth, health research, the environment and our everyday lives.

 

Programming artificial intelligence

From an IT perspective, the programming of AI-based software technologies is something of a supreme discipline. However, the tools that are helpful for developing artificial intelligence are in many cases freely available to the public. The easy-to-read programming language Python is of particular relevance; its programming libraries, which are freely accessible on the web, enable the evaluation of large amounts of data and are therefore predestined for machine learning. Professional developers trust TensorFlow, an end-to-end open-source platform for machine learning that also powers Google applications, among other things. TensorFlow was originally programmed by the Google Brain team for internal purposes, but was later released under an Apache 2.0 open-source license.
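
As a closing sketch (assuming TensorFlow 2.x is installed; the random training data below stands in for a real labeled dataset), a few lines of Python suffice to define and train a small neural network with TensorFlow's Keras API:

```python
# Minimal TensorFlow/Keras sketch (assumption: TensorFlow 2.x installed;
# the random data below stands in for a real labeled dataset).
import numpy as np
import tensorflow as tf

# Stand-in data: 200 samples with 8 features and a binary label each.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8)).astype("float32")
y = (X.sum(axis=1) > 0).astype("float32")

# A small feedforward network: two hidden layers, one sigmoid output.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.predict(X[:3], verbose=0))  # probabilities for the first samples
```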

 
