Algorithmics as the Basis of Artificial Intelligence
Basically, artificial intelligence is anything that makes a computer behave in a way that appears intelligent. That may be a complex program with a large set of rules that determines what the computer should output for a given input. These days, however, we usually think of machine learning when we talk about artificial intelligence. Machine learning is a set of techniques that allow a computer to improve its output over time as it is exposed to many different inputs. In effect, the computational algorithm can be modified by the computer itself, and it does so because it learns new input-output patterns.
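To make this concrete, here is a minimal sketch in Python of what "improving from input-output patterns" can look like. It is a deliberately tiny toy, a single adjustable parameter fitted by plain gradient descent, not how real machine-learning systems are built, but it shows the principle: the program adjusts its own computation until its outputs match the examples.

```python
# A minimal sketch of "learning" (assumption: plain gradient descent
# on a one-parameter model; real systems are far more elaborate).

# Training data: input-output pairs the program should learn to reproduce.
# The hidden rule here is simply "output = 3 * input".
data = [(1, 3), (2, 6), (3, 9), (4, 12)]

w = 0.0              # the single adjustable parameter of our "algorithm"
learning_rate = 0.01

for epoch in range(1000):
    for x, target in data:
        prediction = w * x               # run the current algorithm
        error = prediction - target      # compare with the desired output
        w -= learning_rate * error * x   # adjust the parameter to reduce the error

print(w)  # close to 3.0: the program has "learned" the rule from examples
```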
This, however, does not change the fact that it is an algorithm that determines what output the computer produces when it is presented with a given input. So the basis of artificial intelligence is algorithmics.
An algorithm is a kind of computational recipe: it tells the computer what to do with the input it gets and what to eventually output. Usually algorithms consist of step-by-step instructions, but they can also contain loops and conditional branches. Basically, anything that can be modeled as a Turing machine is an algorithm. Algorithms can be written in informal language, but most of the time they are written in high-level programming languages such as Java or C++.
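As a concrete illustration, here is a classic algorithm, Euclid's method for the greatest common divisor, written in Python. It shows the ingredients just mentioned: step-by-step instructions, a loop, and a conditional branch (the loop's termination test).

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a step-by-step recipe with a loop and a branch."""
    while b != 0:          # conditional branch: keep going until the remainder is zero
        a, b = b, a % b    # step: replace (a, b) by (b, a mod b)
    return a               # the last non-zero value is the greatest common divisor

print(gcd(48, 18))  # 6
```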
How is it possible for a computer to modify the algorithm it works with? This requires a special representation of the algorithm. A common representation is the neural network. In a neural network, the algorithm consists of many artificial neurons which receive input and produce output. By arranging these neurons into a network, complex computations can be performed.
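The following sketch illustrates the idea with a handful of hand-written neurons in Python. The weights are made-up values chosen purely for illustration; in a real neural network they would be learned, and there would be vastly more neurons.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of its inputs passed
    through a squashing function (here the logistic sigmoid)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Arrange neurons into a tiny network: two hidden neurons feed one output neuron.
# All weights below are illustrative assumptions, not learned values.
def network(x1, x2):
    h1 = neuron([x1, x2], [0.5, -0.4], bias=0.1)
    h2 = neuron([x1, x2], [-0.3, 0.8], bias=0.0)
    return neuron([h1, h2], [1.2, -0.7], bias=0.2)

print(network(1.0, 0.0))  # some value between 0 and 1
```

Because the computation is fully determined by the numeric weights, "modifying the algorithm" amounts to adjusting numbers, which is exactly what learning procedures do.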
Another way of representing an algorithm so that it can be modified is a data structure called a tree. A tree consists of nodes which can themselves receive input from other nodes. If you want to compute (x + y), you can represent this as a node that stands for the addition and receives the values of x and y as input nodes. The computation (z * (x + y)) would then be represented by a multiplication node that receives z as an input as well as the result of the addition of x and y. This tree representation is used by a programming technique called genetic programming, which is not the mainstream approach to machine learning but can serve the same purpose. In genetic programming, the algorithm is modified by introducing small "mutations", and the computer then checks whether the modified algorithm performs better than the original one. In this way it is possible to improve existing algorithms.
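Here is a toy version of both ideas in Python: the expression (z * (x + y)) encoded as a tree, an evaluator that walks the tree, and a crude "mutation" that occasionally swaps an operator. Real genetic programming systems maintain whole populations of trees and keep a mutant only if it scores better on some fitness measure; this sketch only shows the representation.

```python
import random

# A tiny expression tree: a tuple ("op", left, right), or a variable
# name, or a numeric constant. ("*", "z", ("+", "x", "y")) is z * (x + y).

def evaluate(node, env):
    """Recursively evaluate an expression tree given variable values."""
    if isinstance(node, tuple):
        op, left, right = node
        a, b = evaluate(left, env), evaluate(right, env)
        return a + b if op == "+" else a * b
    return env.get(node, node)  # a variable name or a plain constant

def mutate(node):
    """A crude 'mutation' in the spirit of genetic programming:
    occasionally swap an operator for the other one."""
    if isinstance(node, tuple):
        op, left, right = node
        if random.random() < 0.2:
            op = "*" if op == "+" else "+"
        return (op, mutate(left), mutate(right))
    return node

tree = ("*", "z", ("+", "x", "y"))                 # z * (x + y)
print(evaluate(tree, {"x": 2, "y": 3, "z": 4}))    # 20
mutant = mutate(tree)
print(mutant)  # possibly a different program; GP keeps it only if it performs better
```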
Nowadays artificial intelligence programs use thousands, if not millions, of artificial neurons or similarly huge trees. The computation is so complex that it is not really possible even for the engineers to understand what is going on. The power of these complex neural networks and trees can be seen in artificial intelligence software such as ChatGPT. We should not forget, however, that the theoretical foundations of modern artificial intelligence date back to the 1980s, and only recent increases in computational power have made possible the successes we now celebrate.
Claus D. Volko