Decoding the Black Box in AI
By CIOAdvisor Apac | Friday, November 30, 2018
Artificial Intelligence (AI) has taken the market by storm ever since it emerged, yet the opaque "black box" at its core remains an apple of discord. Audiences are reluctant to place their trust in AI unless they understand how it is built and how it functions. If users doubt the veracity of the technology in use, AI risks losing its stronghold to mistrust, a lesson illustrated by the largely futile efforts of IBM's Watson for Oncology.
Slowly and steadily, AI has crept into our daily lives: self-driving cars, medical treatment, and defense weaponry are used without a second thought, yet a doubt lingers in some remote corner of the mind about the mechanism behind their smooth functioning. That doubt centers on the algorithms.
Under the hood of the "black box" lies a complicated network of interconnected neurons, a neural network whose topography resembles that of the human brain, working together to perform tasks. The network is adaptive, adjusting its outputs as it learns. As the implementation of AI has gained momentum, the algorithms inside the black box are modified to suit the need of the hour.
Neural networks can recognize patterns in data that would be too complicated or time-consuming for humans to find; delivering the same results manually would require logic too complex to code with age-old programming tools.
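To make the "black box" concrete, here is a minimal sketch of the idea: a tiny neural network, written from scratch with NumPy, that learns the XOR pattern, a function famously awkward to express with simple hand-written rules. Every name and hyperparameter below (layer sizes, learning rate, epoch count) is an illustrative assumption, not something described in the article. The point is that even for this trivial task, the learned weight matrix is correct yet unreadable to a human, which is exactly the opacity the article describes.

```python
import numpy as np

# Illustrative "black box": a 2-8-1 sigmoid network trained on XOR.
# All hyperparameters here are assumptions chosen for this sketch.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Randomly initialized weights: 2 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)

lr = 1.0
for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: squared-error gradient through the sigmoids.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates.
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

predictions = (out > 0.5).astype(int).ravel()
print(predictions)  # the network's answers for the four XOR inputs
print(W1)           # the learned weights: correct, but opaque to a human reader
```

The network ends up computing XOR, but nothing in `W1` or `W2` reads as a rule a person could audit. Scale this from ten weights to millions, and the interpretability problem the article raises becomes clear.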
Even so, it cannot be denied that AI has made life easier and saves time. Still, deep learning is no panacea: "There's a world of problems for which deep learning is just not the answer," says Zoubin Ghahramani, a machine-learning researcher at the University of Cambridge, UK.