I Don’t Trust AI Algorithms Yet Either, but Would You if They Could Self-Optimize?

Author: Nathaniel Bischoff

As new information is compiled, you are constantly learning more about your field; why shouldn’t you expect the same of AI? One of the issues with model training in artificial intelligence is that once a model is built for a certain data set, there is little wiggle room in how the algorithm adapts to new variables and data points. With the data explosion from genomic sequencing to real-time monitoring on portable medical devices, including our phones, there are far more parameters (measured variables and data points) than ever before. How do our current algorithms adapt? If data scientists want to refine a model, they essentially have to retrain the original model to fit the ever-growing amount of data we can accumulate on a patient or a group of patients.

What if our models could continuously get better, essentially becoming smarter with more “studying” of more data points? In 2016, most data scientists are using basic, outdated tools and algorithms to analyze a data set, and if physicians were to ask the same question even a few months later, the analysis would need to be redone because of the new data collected. It would be like rereading a new edition of a textbook to study the same material after already reading an older edition; it would not be an efficient use of time or space.

By allowing our models to analyze data both in a supervised way (using the old data schema and outcome variables) and in an unsupervised way (incorporating the new data points to build a whole new, more accurate model), we can begin to help our algorithms learn. This continual optimization would be the model gaining intelligence and insight into the problem at hand. In the code for the new model, there would be a model-checking sequence to make sure the new model really is more accurate than the old one. One area for exploration is the tradeoff in computational power and storage that such a model-building application would demand.
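The model-checking sequence described above can be sketched as a simple champion/challenger loop. This is a minimal illustration, not the article’s actual system: the toy mean-predictor “model”, the `train`, `mse`, and `maybe_update` names, and the data layout are all assumptions made for the example.

```python
def train(data):
    """Toy 'model': predict the mean outcome seen in the training data.
    (A stand-in for whatever learning algorithm is actually used.)"""
    mean = sum(y for _, y in data) / len(data)
    return lambda x: mean

def mse(model, holdout):
    """Mean squared error on a held-out data set."""
    return sum((model(x) - y) ** 2 for x, y in holdout) / len(holdout)

def maybe_update(champion, expanded_data, holdout):
    """Retrain on the expanded data set, but keep the retrained model only
    if it beats the current one on the held-out check -- the 'model
    checking sequence' described above."""
    challenger = train(expanded_data)
    if mse(challenger, holdout) < mse(champion, holdout):
        return challenger   # accept: measurably more accurate
    return champion         # reject: keep the existing, trusted model
```

As new patient data arrives, `maybe_update` is called with the expanded data set; because the old model is retained whenever the retrained one fails the accuracy check, the deployed model can only stay the same or improve, which is the property that makes unsupervised retraining safe to automate.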
This would be expensive, but the cost could be offset because a data scientist would no longer need to oversee the optimization. It would also benefit the clinician, as the model would be constantly improving and would become more trustworthy as a support tool. Medical staff must be willing to trust algorithms to assist in clinical decision support if we are to improve diagnosis and outcomes, and such automated learning would go a long way toward building that trust.