Is it essential to understand the perspective of an Artificial Intelligence model?

Ashwin Balaji
5 min read · Jan 6, 2021


Today, our world is progressing exponentially toward AI, and we are surrounded by its applications in domains like education, health, engineering, media and e-commerce, law, the service/government sector, transportation, etc., because it makes our lives effortless and comfortable.

Imagine a world in which a mobile entity (a self-driving car, an unmanned aerial vehicle (UAV), etc.), a hardware-software infrastructure (cloud computing, the Internet of Things, robotics), a security or anomaly-detection system (data/Internet security, blockchain applications), a mathematical engine (such as quantum computing), or a natural-language-processing or human-computer-interaction (HCI) system could explain the perception of machines to us.
We know that humans gain knowledge by applying trial and error to a problem, refining candidate solutions over time until they arrive at an effective one. The approach is the same for learning machines. The illustration below explains the working of an AI.

Continuous Learning of a machine based on the logic provided by a human
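To make this loop concrete, here is a minimal, hypothetical sketch of trial-and-error learning in Python: the machine proposes a small change to its guess, measures the error, and keeps the change only when the error shrinks. The toy data and the hidden rule y = 3x are invented for illustration; real models tune millions of parameters, but the basic cycle is the same.

```python
import random

def error(w, data):
    """Squared error of the guess w against observed (x, y) pairs."""
    return sum((w * x - y) ** 2 for x, y in data)

# Toy data generated from the hidden rule y = 3x (unknown to the learner).
data = [(x, 3 * x) for x in range(1, 6)]

w = 0.0                                      # initial guess
for _ in range(1000):                        # repeated trials
    trial = w + random.uniform(-0.1, 0.1)    # propose a small change
    if error(trial, data) < error(w, data):  # keep it only if it reduces error
        w = trial                            # otherwise, discard and retry

print(f"learned w = {w:.3f} (true value is 3)")
```

Running it prints a value close to 3. Notice that the machine never "understood" the rule; it simply kept the guesses that worked, which is exactly why its internal reasoning is hard to inspect afterwards.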

In the above illustration, the AI's reasoning is termed a "black box". Why?

A developer builds an AI model by training it on gathered knowledge so that it can analyze data.

Yet the developer never knows exactly what the machine does to evaluate its results, i.e., transparency questions such as:

  1. What does the machine do inside the black box to give the desired results, i.e., what type of reasoning or intelligence does the model apply to the given data "internally"?
  2. When does the machine evaluate the valid reasoning for a particular case in the black box, i.e., at what point in time does the model decide that its logical reasoning may or may not be valid for this data?
  3. Where does the machine determine that the given reasoning may or may not be valid, i.e., at what position in the flow of execution does the machine decide to arrive at a particular conclusion?
  4. Why did the model conclude with the desired (or undesired) result, i.e., what made the machine's intelligence behave that way? Is it the input or the internal logic that decides the output?
  5. And finally, how does the input go through the reasoning model, i.e., how does the model take its decisions?

These are some of the interesting questions we may ask a machine, and yes, it should be able to answer them well enough to explain its flow of execution. That builds transparency between human and machine, which may increase interaction between the two and also helps the developer control, update, and verify the machine from time to time. The approach that makes this possible is called XAI, or Explainable AI; a small code sketch below shows one concrete XAI technique. That is what I consider a human-friendly machine.

Explainable-AI
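To give a taste of what XAI tooling looks like in practice, here is a short sketch using scikit-learn's permutation-importance utility, one common way to approach the "why" question above: shuffle each input feature and see how much the model's accuracy suffers. The dataset and classifier are toy choices made for this illustration, not anything from the scenarios below.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy setup: a bundled dataset and an off-the-shelf classifier.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record the drop in held-out accuracy:
# the larger the drop, the more the model relied on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five features the model leaned on most.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: -pair[1])
for name, score in ranked[:5]:
    print(f"{name:>25s}: {score:.3f}")
```

The printed ranking is a crude but honest explanation: it names the features the model actually relied on, rather than the ones we assume it used.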

Some real-life examples of XAI:

  1. Imagine a smart surveillance camera in a bank that captures day-to-day movement in order to spot any discrepancy. Now a robbery takes place in the bank, and the robbers are wearing black masks. The so-called "smart" surveillance camera captures the video and immediately reports that a group of people of a particular race (or religion, sex, dress code, etc.) may have committed the robbery, which may or may not be the case; the machine holds a strong, incorrect bias toward a race (or religion, sex, dress code, etc.), which is unacceptable. Here the problem lies with the machine's model and the data it has learned from. If we had a strategy for knowing the inner workings of the machine, we could overcome such problems.
  2. Another real-life instance: a machine (or robot) installed to detect diseases and associated conditions. If the model produces outputs based on biased, skewed, or over-weighted factors, it quickly reaches a saturation point (with respect to information) as far as prediction and classification are concerned. Learning based on such skewed factors may therefore lead to incorrect predictions, and such errors are unacceptable in the life-critical medical and healthcare industry.
  3. An e-commerce platform integrated with a head-mounted display (HMD) gives customers a real-time virtual product trial. People often get confused about whether a chosen product suits them, especially during apparel selection. The machine may answer yes or no, i.e., whether the product suits the customer, based on user-profile data (body ratio, favourite colours, height, weight, cloth type, etc.). After virtually trying the product, the customer may ask the AI model (through a microphone module connected to the HMD), "Why do you think this dress looks good on me?", to which the machine may reply, citing the factors behind its conclusion, "I think you should go with the black dress instead of the green one, because you already have 10 green dresses and the cloth type of the black dress is better." Such a conversation may take the customer experience to a whole new level, and the machine may store the information for future predictions (a toy sketch of how such an explanation might be assembled appears after this list).
  4. A cybersecurity analyst observes an attack on a distributed computing environment but cannot identify the source and point of attack, even after applying strong firewalls on the virtual machines (VMs). To understand the vulnerabilities in the systems, the analyst may command the primary machine to gather and present the log data (in an understandable format) so that the attacks can be reverse-engineered and the vulnerable machines restored.
  5. Another example: a self-driving car collects a user profile, such as the day-to-day routine and tasks of the owner. Sometimes the owner may forget an essential task, like a "family meeting at 8:00 PM on Thursday, 14th January 2021". As soon as the owner enters the vehicle (assuming the owner has forgotten about the meeting), the vehicle may route toward the meeting location. The owner may ask, "Why are you routing to this location?", to which the vehicle assistant may reply, "You have a family meeting at 8:00 PM today." In this way, recorded tasks may be taken care of by the vehicle assistant.
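Echoing the apparel scenario in example 3, here is a toy, hypothetical sketch of how a recommender's decision factors might be turned into a human-readable explanation. The Dress class, the wardrobe list, and the quality scores are all invented for illustration; no real product or API is implied.

```python
from dataclasses import dataclass

@dataclass
class Dress:
    colour: str
    cloth_quality: float  # hypothetical 0-1 quality score

def explain_choice(chosen: Dress, rejected: Dress, wardrobe: list[str]) -> str:
    """Build an explanation from the factors that drove the decision."""
    reasons = []
    owned = wardrobe.count(rejected.colour)
    if owned:
        reasons.append(f"you already have {owned} {rejected.colour} dresses")
    if chosen.cloth_quality > rejected.cloth_quality:
        reasons.append(f"the cloth type of the {chosen.colour} dress is better")
    return (f"I think you should go with the {chosen.colour} dress "
            f"instead of the {rejected.colour} one, because "
            + " and ".join(reasons) + ".")

wardrobe = ["green"] * 10 + ["blue", "red"]
print(explain_choice(Dress("black", 0.9), Dress("green", 0.6), wardrobe))
```

The point is not the toy logic but the shape of the output: the machine states its conclusion together with the concrete factors that produced it, which is what makes the explanation usable by a customer.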

These examples are more futuristic because XAI technology is still in its initial phase. Applications of AI may witness a dip in the future when machines become more intelligent and complex than humans, unless their reasoning can be explained.

Before ending, I would like to ask a question: are we capable enough to understand the complex explanations of an AI model? This question is important because human decisions are emotionally driven, sometimes irrational and intuitive, whereas machine explanations are logical.

Do let me know your thoughts and feedback about this article.

References:

https://en.wikipedia.org/wiki/Explainable_artificial_intelligence

https://www.darpa.mil/program/explainable-artificial-intelligence
