My main focus in Artificial Intelligence is using Python libraries such as PyTorch, TensorFlow, and Gensim for word embeddings, working with Word2Vec and seq2seq models.
The amount and variety of data gathered around the world will continue to grow by leaps and bounds, as will the power and sophistication of the computers and algorithms used to analyze it all.
First, while machine learning can be applied to just about any domain of knowledge, its methods are most applicable to significantly narrower and more specialized problems than those that humans are capable of handling, and there are many tasks for which machine learning is not effective. In particular, as we’re frequently reminded, correlation does not imply causation.
Machine learning is a statistical modeling technique, like data mining and business analytics. It finds and correlates patterns between inputs and outputs without necessarily capturing their cause-and-effect relationships. It excels at problems in which a wide range of potential inputs must be mapped onto a limited number of outputs; large data sets are available for training the algorithms; and the problems to be solved closely resemble those represented in the training data, such as image and speech recognition and language translation. But deviations from these assumptions can lead to poor results. This is clearly the case when attempting to apply machine learning to highly complex, open-ended problems like markets and human behavior.
The second major challenge is that machine learning can serve as a magnifier for existing errors and biases in the data. Garbage in, garbage out applies as much to AI today as it has to computing since its early years. Because AI algorithms are trained on the vast amounts of data collected over the years, if those data include past racial, gender, or other biases, the predictions of these algorithms will reflect them.
“When this data is then used to make decisions that may plausibly reinforce those processes (by singling out, for example, particular groups that are regarded as problematic for particular police attention, leading them to be more liable to be arrested, and so on), the bias may feed upon itself,” Mr. Farrell writes.
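The feedback loop described above can be sketched as a toy simulation. This is my own illustrative model, not one from Mr. Farrell's text: two groups have the same underlying offense rate, but police attention is allocated based on past arrest counts, so a small initial bias in the historical data compounds over time.

```python
# Toy bias-feedback simulation (illustrative assumptions throughout):
# both groups have the SAME true offense rate; attention is allocated
# superlinearly by past arrest counts (modeling concentrated targeting),
# and new arrests scale with attention received.

true_rate = 0.1                      # identical real offense rate for both groups
arrests = {"A": 12.0, "B": 8.0}      # historical data slightly over-represents group A

for _ in range(20):
    # Attention share favors the group the data flags as "problematic".
    weights = {g: c ** 2 for g, c in arrests.items()}
    total_w = sum(weights.values())
    for g in arrests:
        attention = weights[g] / total_w
        arrests[g] += 100 * true_rate * attention   # expected new arrests this round

share_A = arrests["A"] / sum(arrests.values())
print(f"Group A's share of arrests after 20 rounds: {share_A:.2f}")
```

Group A starts with 60% of recorded arrests and ends with substantially more, despite identical behavior, which is the self-reinforcing dynamic the passage describes.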
In more open, free-market democratic societies there will always be ways for people to point out and counteract these biases, he says, but in more centrally managed, autocratic societies these corrective tendencies will be weaker.
In short, there is a very plausible set of mechanisms under which machine learning and related techniques may turn out to be a disaster for authoritarianism, reinforcing its weaknesses rather than its strengths: increasing its tendency toward bad decision making and further reducing the possibility of negative feedback that could help correct errors.