Applications and Challenges of Artificial Intelligence

Artificial intelligence is the new hot topic. Before you race into an AI journey for your organization, it’s important to understand the components or applications associated with AI as well as the common challenges organizations face as they introduce AI systems. This post quickly defines the key concepts you need to understand. First, let’s look at the fundamental components or applications associated with AI systems.

Machine Learning: This term has been around for years and is the foundation of AI. It is a subset of AI in which computer models review large amounts of data and develop insights over time. The models are trained to recognize traits in the data so they can make decisions without being directly told what to do, similar to how humans learn by watching and then taking action. To build a computer model that performs machine learning, you need data, a model to digest it, and a method of training and testing.
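
To make that concrete, here’s a minimal Python sketch of the data/model/train/test loop using scikit-learn’s bundled iris dataset. The dataset and classifier choice are purely illustrative, not a recommendation.

```python
# A minimal sketch of the machine learning loop: data, a model, training, testing.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)                    # the data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)           # hold some data back for testing

model = DecisionTreeClassifier()                     # the model that digests the data
model.fit(X_train, y_train)                          # training

predictions = model.predict(X_test)                  # testing on data the model never saw
print(f"Accuracy: {accuracy_score(y_test, predictions):.2f}")
```

The key idea is the split: the model is judged on data it wasn’t trained on, which is how you know it learned traits rather than memorizing answers.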

Natural Language Processing (NLP): NLP is the ability for computers to understand text and speech, i.e., to interpret what they are being told. Essentially, NLP allows computers to understand how we communicate with them using our own natural language. This is how you can ask AI systems questions in your language of preference and receive answers back in the same language.
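
As a toy illustration, here’s a small Python sketch using NLTK’s VADER sentiment analyzer: the system pulls meaning (in this case, sentiment) out of free-form text rather than requiring a rigid command. The example sentence is made up, and sentiment is just one narrow slice of NLP.

```python
# A toy NLP example: interpreting free-form natural language text.
import nltk
nltk.download("vader_lexicon", quiet=True)   # lexicon is fetched on first run
from nltk.sentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
text = "This security alert looks really bad, should I be worried?"
scores = analyzer.polarity_scores(text)      # extracts meaning (sentiment) from the raw text
print(scores)                                # e.g. {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}
```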

Knowledge Mining: AI systems need data to learn. This can be an extremely challenging task, as many organizations can’t make use of roughly 80% of their available data, commonly referred to as unstructured data. If that data can’t be used, it’s essentially worthless, which is why in the security industry parsers and other data-conversion tools have been clutch to any security operations center. Knowledge mining is the ability to pull structured information out of unstructured data, making it interpretable by the system. Knowledge mining can also find patterns within data, making it a key requirement for most AI systems.
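
Staying with the security angle, here’s a minimal Python sketch of what a parser does in practice: pulling structured fields out of an unstructured log line so a system can actually query it. The log format and field names are hypothetical.

```python
# A minimal sketch of knowledge mining: turning unstructured log text into
# structured key/value data. The log format below is a hypothetical example.
import re

raw_log = "Oct 12 14:32:01 fw01 DROP src=10.0.0.5 dst=8.8.8.8 proto=TCP dport=443"

pattern = re.compile(
    r"(?P<timestamp>\w{3} +\d+ [\d:]+) (?P<host>\S+) (?P<action>\S+) "
    r"src=(?P<src>\S+) dst=(?P<dst>\S+) proto=(?P<proto>\S+) dport=(?P<dport>\d+)"
)

match = pattern.match(raw_log)
if match:
    structured = match.groupdict()   # the unstructured line is now queryable data
    print(structured["src"], "->", structured["dst"], structured["action"])
```

Once data looks like that dictionary instead of a raw string, an AI system (or a SIEM) can baseline it, correlate it, and learn from it.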

Anomaly Detection: The concept of anomaly detection isn’t new to the security world. I’ve posted about detection capabilities HERE, where I break them down into pattern matching known threats (signatures), behavior monitoring (i.e., looking for known bad activity), and lastly creating a baseline to identify anomalies. AI takes the anomaly detection concept to a much higher level, allowing organizations to better understand unknown threats.
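
Here’s a bare-bones Python sketch of the baseline approach: learn what “normal” looks like from history, then flag values that fall far outside it. The login counts and the three-standard-deviation threshold are made-up illustrative choices; real systems are far more sophisticated.

```python
# A minimal sketch of baseline-style anomaly detection.
from statistics import mean, stdev

daily_logins = [102, 98, 110, 95, 105, 101, 99]   # hypothetical historical baseline
baseline_mean = mean(daily_logins)
baseline_std = stdev(daily_logins)

def is_anomaly(value, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the baseline."""
    return abs(value - baseline_mean) / baseline_std > threshold

print(is_anomaly(104))   # False: within the normal range
print(is_anomaly(450))   # True: far outside the baseline, worth investigating
```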

Computer Vision: This is how computers such as autonomous cars can distinguish a person from a trashcan. The computer is given predefined features and learns about the world through visual data.
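
As a rough sketch of what this looks like in code, here’s Python using torchvision’s pretrained ResNet-18 image classifier. The input file name is hypothetical, and a real autonomous-driving stack is vastly more complex than a single classifier.

```python
# A minimal computer vision sketch: classifying an image with a pretrained model.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet18_Weights.DEFAULT            # pretrained ImageNet weights
model = models.resnet18(weights=weights)
model.eval()

preprocess = weights.transforms()                    # resize/normalize as the model expects
image = Image.open("photo.jpg").convert("RGB")       # hypothetical input image
batch = preprocess(image).unsqueeze(0)               # add a batch dimension

with torch.no_grad():
    logits = model(batch)
label = weights.meta["categories"][logits.argmax().item()]
print(label)                                         # one of ImageNet's 1,000 class labels
```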

These components or “applications” are used to build an AI system. There are also challenges everybody experiences as they build AI systems. Fundamentally, it’s important to understand what these mean and be aware of the risks associated with each.

Hallucinations: It is very rare that an AI system will always be 100% accurate in its findings. When an AI system barfs out an inaccurate outcome, it is called a hallucination. An example would be a gaming AI system proposing a move that doesn’t make sense based on the defined criteria: you ask it how to win the game and it tells you to eat an apple. Sometimes hallucinations can be traced back to a cause, while other times they just happen based on all of the inner workings of what trained your system.

Fairness: This continues to be a problem. In short, people (and many laws) want AI systems to be fair; however, AI systems want to solve the problem, and that can create unfair outcomes. A TED talk that directly addresses this concept can be found HERE. For example, you instruct a system to win Super Mario Bros. and the AI system decides to make Mario a huge giant that can cross the entire map in one step. Unless rules are put in place to deny this, the AI system will decide this is the best way to win. This impacts fairness when the AI system doesn’t understand the concept of fair. One very bad situation that is occurring involves AI-based recruiting systems. These systems are instructed to find the best people while being fair, yet they identify unfair patterns and apply those patterns to how they learn. An example is seeing more males than females in IT roles and hence filtering out women … which isn’t fair, but to the AI system is an understandable pattern it learned.
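
Here’s a toy Python sketch of one basic fairness check, comparing selection rates across groups (sometimes called demographic parity). The candidate outcomes are entirely made up, and a real fairness audit involves much more than a single metric.

```python
# A toy fairness check: compare the rate at which a screening model selects
# candidates across groups. All outcomes below are hypothetical.
from collections import defaultdict

# (group, selected_by_model) pairs -- made-up screening results
outcomes = [("male", True), ("male", True), ("male", False),
            ("female", True), ("female", False), ("female", False)]

totals, selected = defaultdict(int), defaultdict(int)
for group, was_selected in outcomes:
    totals[group] += 1
    selected[group] += was_selected          # True counts as 1

for group in totals:
    rate = selected[group] / totals[group]
    print(f"{group}: selection rate {rate:.0%}")   # large gaps are a red flag worth auditing
```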

Reliability and Safety: It is important that an AI system is reliable when it is responsible for something important, such as protecting human life. Let’s say AI is driving our cars. If there is a 5% error rate, that would mean 5% of the time people could be harmed, which isn’t an acceptable situation. An additional challenge is that an AI system could be reliable at first but later receive bad data and become unreliable. This means this challenge needs to be continuously reviewed.

Privacy and Security: AI needs data, and lots of data in the world is protected by law. This creates a challenge when AI systems are digesting large amounts of data without regard for data privacy. You need to think about where and how the data is being obtained, as well as isolate systems that contain sensitive data.

Inclusiveness: This challenge is all about what you don’t know. You may build an AI system for 95% of the people in the world and later find there is a 5% you didn’t think about. Looking back at the job recruiting AI system, inclusiveness is the checks and balances that ensure systems aren’t unfair and instead include everybody who could have the skill sets for the role. This may seem simple to some people, but it can be really challenging because again … it’s about what you don’t know that should be included.

Transparency: Many people are concerned about the threat AI poses to privacy and fairness. To overcome this, transparency is needed to explain how conclusions and behaviors are developed. If your AI system is a secret black box, people will question what is being used to develop its outcomes. A related challenge is being transparent about the use of sensitive data: you can’t expose the data itself, but you can expose how it is used. This concept can become a slippery slope.

Accountability: When things go wrong, somebody … not the AI system … needs to be accountable. This can become a huge challenge when multiple parties are involved with developing and using an AI system. If a hallucination occurs and causes a negative outcome, who is responsible? The programmers? The providers of the data? The user of the AI system? Guidelines need to be created or accountability will become a mess of finger-pointing.

Once you understand these challenges and fundamental applications, you are ready to learn about developing an AI system. I’ll post about Microsoft’s offering in my next post to help explain the components of an AI system and how you could get started building one. I’ll continue to reference this post for the key terms and challenges as they are addressed.
