The potential and limitations of artificial intelligence

Everyone is excited about artificial intelligence. Great advances have been made in the technology and techniques of machine learning. However, at this early stage of the field's development, we may need to curb our enthusiasm a bit.

The value of AI can already be seen across a wide range of industries, including marketing and sales, business operations, insurance, banking, and finance. In short, it is suited to a wide range of business activities, from managing human capital and analyzing employee performance to recruiting. Its potential runs as a common thread through the entire business ecosystem. It is already more than evident that AI could be worth trillions of dollars to the economy as a whole.

Sometimes we can forget that AI is still a work in progress. Because the field is in its infancy, there are limitations in the technology that must be overcome before we truly arrive in the brave new world of AI.

In a recent podcast published by the McKinsey Global Institute, which analyzes the global economy, Michael Chui, a partner at the institute, and James Manyika, its chairman and director, discussed the limitations of AI and what is being done to alleviate them.

Factors that limit the potential of AI

Manyika noted that AI's limitations are "purely technical." He framed them as questions: how do you explain what the algorithm is doing? Why does it make the decisions, predictions, and forecasts it does? Then there are the practical limitations involving the data itself, as well as how it is used.

He explained that in the learning process, we give data to computers not only to program them but also to train them. "We're teaching them," he said. They are trained by providing them with labeled data. Teaching a machine to identify objects in a photograph, or to recognize a variation in a data stream that may signal an impending breakdown, is done by feeding it large amounts of labeled data: in this batch of data the machine is about to break, in that batch it is not, and from those examples the computer learns to determine when a machine is about to break.
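To make the idea concrete, here is a minimal sketch of learning from labeled data, with invented sensor readings and scikit-learn standing in for whatever model a real system would use; nothing here comes from the podcast itself.

```python
# A toy illustration of training from labeled data: each row of sensor
# readings is tagged "about to break" (1) or "not about to break" (0),
# and the model learns to separate the two. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Healthy machines: low vibration and temperature readings.
healthy = rng.normal(loc=[0.2, 0.3], scale=0.1, size=(500, 2))
# Failing machines: elevated readings.
failing = rng.normal(loc=[0.7, 0.8], scale=0.1, size=(500, 2))

X = np.vstack([healthy, failing])
y = np.array([0] * 500 + [1] * 500)  # the human-supplied labels

model = LogisticRegression().fit(X, y)

# Score a new, unseen reading.
print(model.predict_proba([[0.65, 0.75]]))  # high probability of failure
```

The point is simply that the labels, 0 and 1, are human-supplied: the model only learns the distinction because someone tagged the examples first.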

Chui identified five AI limitations that need to be overcome. The first is data labeling: today, humans must tag the data by hand. For example, people review photos of traffic, tracing cars and lane markers, to create the tagged data that self-driving cars use to build the algorithms that drive them.

Manyika noted that he knows of students who go to a public library to tag art so that algorithms can be created for the computer to make predictions. For example, in the UK, groups of people are identifying photos of different breeds of dogs, producing tagged data from which algorithms are built so the computer can recognize what it is looking at.

This process is also being used for medical purposes, he noted. People are labeling images of different types of tumors so that when a computer scans them, it can understand what a tumor is and what type it is.

The problem is that an enormous amount of labeled data is needed to teach the computer. The challenge is to create ways for the computer to work through labeled data faster, or to need less of it.

Tools now used to do that include generative adversarial networks (GANs). A GAN pits two networks against each other: one generates candidate outputs, and the other tries to distinguish those generated outputs from real ones. The competition pushes the generator to produce increasingly convincing results. This technique allows a computer to generate art in the style of a particular artist, or architecture in the style of buildings it has observed.
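The following is a minimal PyTorch sketch of that adversarial setup, under toy assumptions: a generator learning to mimic a one-dimensional Gaussian rather than art or architecture. It is meant only to show the two competing networks, not any production technique.

```python
# A minimal GAN: the generator G maps noise to samples; the discriminator D
# scores samples as real (1) or generated (0). They are trained in turns.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0   # "real" data drawn from N(2, 0.5)
    fake = G(torch.randn(64, 8))            # generator's attempt to mimic it

    # Discriminator step: push real samples toward 1, generated toward 0.
    opt_d.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: try to fool the discriminator into outputting 1.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

# After training, generated samples should cluster near the real mean of 2.0.
print(G(torch.randn(1000, 8)).mean().item())
```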

Manyika noted that people are also experimenting with other machine learning techniques. For example, he said that researchers at Microsoft Research are developing in-stream labeling, a process that labels data through use: the computer tries to interpret the data based on how it is being used. Although in-stream labeling has been around for a while, it has recently made great strides. Still, according to Manyika, data labeling remains a limitation in need of further development.
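One way to picture labeling through use, as an invented illustration rather than Microsoft's actual implementation: every ordinary interaction with a system is captured as an implicit label, so the training set grows without anyone tagging data by hand.

```python
# An invented illustration of labeling data through use: each time a user
# acts on a suggestion, that interaction becomes a labeled example, so the
# training set grows as a side effect of normal usage, not manual tagging.
labeled_data = []

def record_interaction(features, user_clicked):
    """Turn an ordinary interaction into a labeled training example."""
    labeled_data.append((features, 1 if user_clicked else 0))

# Simulated usage: suggestions shown, some acted on, some ignored.
record_interaction({"topic": "sports", "hour": 9}, user_clicked=True)
record_interaction({"topic": "finance", "hour": 23}, user_clicked=False)

print(labeled_data)  # implicit labels harvested from use, ready for training
```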

Another limitation of AI is insufficient data. To combat the problem, companies developing AI spend years acquiring data. To shorten that collection time, companies turn to simulated environments: creating a simulated environment inside a computer lets you run many more trials, so the computer can learn far more, far faster.
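As a rough sketch of why simulation helps, consider a toy simulator with invented wear-and-tear dynamics: labeled failure examples can be harvested as fast as the simulator runs, instead of waiting years for real machines to break.

```python
# A toy simulated environment (the physics here is invented): each simulated
# machine degrades over time, and every step yields a labeled example.
import random

def simulate_machine(steps=200):
    """Run one simulated machine; yield (sensor_reading, about_to_fail)."""
    wear = 0.0
    for _ in range(steps):
        wear += random.uniform(0.0, 0.01)          # gradual degradation
        reading = wear + random.gauss(0, 0.02)     # noisy sensor value
        about_to_fail = wear > 0.9                 # label from ground truth
        yield reading, int(about_to_fail)
        if wear > 1.0:
            break                                  # the machine has failed

# Thousands of simulated machine lifetimes in seconds of wall-clock time.
dataset = [sample for _ in range(1000) for sample in simulate_machine()]
print(len(dataset), "labeled examples")
```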

Then there is the problem of explaining why the computer decided what it did. Known as explainability, this issue matters to regulators, who may need to investigate an algorithm's decision. For example, if one person is released from jail on bail and another is not, someone will want to know why. One could try to explain the decision, but it will certainly be difficult.

Chui explained that a technique is being developed that may provide the explanation. Called LIME, which stands for Local Interpretable Model-agnostic Explanations, it involves perturbing parts of a model's inputs and seeing whether that alters the result. For example, if you are looking at a photo and trying to determine whether the object in it is a truck or a car, you can alter the windshield region or the rear of the vehicle and see whether either change flips the prediction. If it does, that shows the model focuses on the rear of the car or the windshield of the truck to make its decision. In effect, experiments are run on the model to determine what makes the difference.
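Here is a from-scratch sketch of that core idea: perturb the input, query the black-box model, and fit a simple weighted linear surrogate whose coefficients reveal what drove the prediction. The real LIME library does considerably more; the model and features below are invented for illustration.

```python
# A simplified sketch of LIME's core idea: explain one prediction of an
# opaque model by probing it locally and fitting an interpretable surrogate.
import numpy as np
from sklearn.linear_model import Ridge

def black_box(X):
    """Stand-in for any opaque model: here, the 'truck' score depends
    mostly on feature 0 (think: the windshield region)."""
    return 1 / (1 + np.exp(-(3.0 * X[:, 0] + 0.2 * X[:, 1])))

instance = np.array([1.0, 1.0])   # the single prediction we want to explain
rng = np.random.default_rng(0)

# Perturb the instance and weight nearby perturbations more heavily.
perturbed = instance + rng.normal(0, 0.5, size=(500, 2))
preds = black_box(perturbed)
weights = np.exp(-np.sum((perturbed - instance) ** 2, axis=1))

# The local linear surrogate's coefficients are the explanation.
surrogate = Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights)
print(surrogate.coef_)  # feature 0 dominates: the model "looks at" it
```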

Finally, biased data is also a limitation for AI. If the data going into the computer is biased, the output will be biased too. For example, we know that some communities are subject to more police presence than others. If a computer is asked to determine whether a large police presence limits crime, and the data comes overwhelmingly from a heavily policed neighborhood with little or none from a lightly policed one, then its conclusion rests mostly on the oversampled neighborhood. That oversampling can produce a biased conclusion, so trusting the AI means trusting the bias inherent in its data. The challenge, therefore, is to find a way to "de-bias" the data.
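A toy numerical illustration of the problem (all numbers invented): in the sketch below, both neighborhoods have the same true incident rate, but heavy police presence means far more incidents are recorded in one, so raw counts make it look much more dangerous. Correcting for the recording rate removes the artifact.

```python
# Biased observation: identical true rates, very different recording rates.
import numpy as np

rng = np.random.default_rng(0)
true_rate = 0.10                      # identical in both neighborhoods
incidents_a = rng.binomial(1, true_rate, size=5000)
incidents_b = rng.binomial(1, true_rate, size=5000)

# Observation is filtered by police presence: 90% recorded vs. 20%.
recorded_a = incidents_a * rng.binomial(1, 0.9, size=5000)
recorded_b = incidents_b * rng.binomial(1, 0.2, size=5000)

print("raw recorded rates:", recorded_a.mean(), recorded_b.mean())
# Neighborhood A looks ~4.5x more dangerous purely from sampling.

# De-bias: divide by the known (or estimated) recording probability.
print("corrected rates:", recorded_a.mean() / 0.9, recorded_b.mean() / 0.2)
```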

So even as we see the potential of AI, we must also recognize its limitations. But do not despair; AI researchers are working feverishly on these problems. Some things that were considered limitations of AI a few years ago no longer are, thanks to the field's rapid development. That is why you should keep checking with AI researchers about what is possible today.
