
Wisdom versus Data: Something to Consider in the World of AI



We are all familiar with the Data Pyramid, but I want to give you a different interpretation of it, especially as it is applied today in the world of AI. AI needs huge amounts of data, particularly if you are using big-data systems like LLMs or GenAI (you don't have to, but that itself is a decision choice).


What I am acutely worried about is that by removing the human-in-the-loop, we are not leaving enough space for wisdom. Most of us are familiar with the pyramid: data sits at the bottom. How do we get to the top?


First of all, data by itself is useless if you cannot make sense of it and use it for decision making. In some cases data visualization helps, but not if the data is bad or does not fit the decision problem and its boundary constraints. So the challenge of collecting the right data is a fundamental one. Why? If the data is about people and their motivations, it will always be imperfect, because human behaviour is not bound by the neat logic of the physical sciences. Hence, generalizations will always have outliers. Does that matter? Yes, if the service needs to be inclusive and fair.


Second, what we capture may not really be correct: we use proxies (as when measuring happiness, well-being, or quality of life). Hence there are huge assumptions that need to be constantly tested, and if the documentation does not state the assumptions underlying the models (and no, these are NOT what ML people call weights), that is a problem. I am not in favour of black-box models applied to humans, though I do think that with narrow, well-defined data they work well and the risks can be minimized (as with AlphaFold).


Assuming you do get to the information stage, it then needs to be converted into knowledge. Sadly, ChatGPT cannot get you to this stage, because it needs expertise (you need to know how to apply it). Think of a robot doctor with ten minutes on ChatGPT (which has amplified data from other people's expertise) but no expertise of its own... what could it miss? Hence the due-diligence process of learning is critical, and the education system needs to reflect this. A doctor with expertise could use ChatGPT as a team member far better than someone with no expertise in treating people and only expertise in virtual simulations. Have our policies considered this? Here we see that field context for experience matters enormously. Hence, for example, defence officials have continued (or prolonged) wars out of the need to keep gathering real-world data on autonomous weapons in field conditions. In expertise, context matters: using drones in Ukraine is very different from using them in the Red Sea (see this article).


Assuming you do gain expertise and knowledge, then comes the next big challenge. Where does wisdom come from? It comes from experience, and experience is long-term and reflective. Think of the breakdown at Boeing, where wisdom was not used and the team put safety on the back burner despite conventional wisdom. Or the fact that we continue to build data centres and invest in the "cloud", which contribute not only carbon emissions equal to or greater than the airline industry's, but also water shortages and ecosystem contamination.


Here is another example: we can use fancy tools like AI to monitor the weather, but some of the solutions for tsunamis and hurricanes have come from ancestral knowledge, like growing mangroves, sea grass (Maldives), or vetiver grass (in Trinidad and Tobago, an introduction from India). Sadly, with AI we have not made a place for wisdom. Wisdom comes from keeping the human in the loop and acknowledging all knowledge; even that which is indigenous and unwritten is important (great report from IEEE here: Planet Positive 2030 - your feedback is needed). Remember the reflection part of wisdom: though we apply knowledge through experience, we realise it needs reflection (it is not an end in itself, but a perpetual quest).


With over 40% of the world not connected online and many languages left out of the "internet dialogues", this is a challenge. Much of our knowledge is tacit and unwritten, so the way we reach it is through conversations, and diverse ones at that. Does your AI system allow for that kind of decision making? Or does it assume, based on its limited data, what works? If you are adopting a system, did you get feedback from those who will be impacted by it and those responsible for it? Painting a car is a very different task from treating a human for mental-health challenges. Should we treat these two tasks differently, and if so, what care should we take in building AI systems?


We also need to ask the question: do we need AI for the sake of AI? Would common sense or dialogue work instead? If the policy is broken, can we change it?


Deciding when to use AI and when not to will become the most fundamental decision we have to take. And it is our wisdom in this choice that will guide future generations.

If you want to know more about making decisions with AI, read my latest book (available on Amazon): AI Enabled Business: A Smart Decision Kit.


