Each year, at about this time, many analysts in the field of computing publish lists of predictions for the coming year. As eagerly anticipated as Santa’s arrival, many of us enjoy the ritual of reading up on the most likely events and trends of the new year. The rationale is a sound one: some analysts are privy to a behind-the-scenes look at many technologies in the works and, supported by a relevant look at the accompanying data, come up with some educated deductions about what to expect in the coming year.
Sounds logical enough! After all, many industries indulge in the same ritual-like behaviour at this time of the year. Take, for example, marketing, finance, pharma, and healthcare: highly sophisticated predictive models identify patterns in historical and transactional data to flag risks and opportunities (Wikipedia). The science of predictive analytics analyzes this information to make predictions about future events.
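To make that idea concrete, here is a minimal sketch of the kind of pattern-finding those models do: fitting a least-squares trend line to historical figures and extrapolating one step ahead. The monthly sales numbers are hypothetical, purely for illustration; real predictive-analytics pipelines are far more elaborate, but the principle is the same.

```python
# Minimal predictive-analytics sketch: fit a least-squares trend line
# to historical data, then extrapolate to the next time step.
# The data below is hypothetical, for illustration only.

def fit_trend(y):
    """Return (slope, intercept) of the least-squares line through
    the points (0, y[0]), (1, y[1]), ..., (n-1, y[n-1])."""
    n = len(y)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(y) / n
    cov = sum((x - mean_x) * (v - mean_y) for x, v in zip(xs, y))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def predict_next(y):
    """Extrapolate the fitted trend line one step past the history."""
    slope, intercept = fit_trend(y)
    return slope * len(y) + intercept

monthly_sales = [100, 104, 108, 112, 116]  # hypothetical history
print(predict_next(monthly_sales))  # perfectly linear series -> 120.0
```

Of course, the whole point of the discussion that follows is that real-world data is rarely this clean, and the model's confidence can far outrun its accuracy.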
So far so good! As more and more organizations build some form of predictive analysis into their decision-making processes, there is a visible shift in the marketplace toward small and medium-sized businesses becoming the primary consumers of this information.
Well – are there any caveats to be on the lookout for? Let’s start by asking ourselves some very basic questions. How do we differentiate between ‘good’ and ‘bad’ information (or, less simplistically, between information that is supported by a sound analytical process and information that isn’t)? Nate Silver notes in his book, The Signal and the Noise, “The volume of information is increasing exponentially. But relatively little of this information is useful…we need better ways of distinguishing between the two”. Silver argues, quite correctly, that not all information can be given equal merit. Sounds simple enough to do! But how many of us have the time, patience, and skill required to wade through a saturated information environment that is more concerned with volume than with quality and accuracy? Without that kind of due diligence, can ‘incorrect’ predictions mislead, or at best confuse, the consumers of this information?
And what of the question of human hubris? Does this exercise encourage or promote a sort of ‘race for the top’ among so-called experts in the field? Is there an unintended pressure on industry gurus to shine and take their rightful place at the top of the heap? And what better way to do so than to make the next great prediction? Philip Bereano of the University of Washington argues, ‘technologies are shaped by social, economic, political and cultural phenomena, making the prediction of technologies a very volatile exercise’. So making predictions is not only complex and difficult to carry out but, by its very nature, might encourage a cultish rise of individuals racing for top recognition in their field.
So, as part of the traditional, seasonal indulgence of ‘treats’, our best advice on this subject is not to consume too many of them without some forethought. To survive in this hyper-rich environment of information (and misinformation), we need to become discriminating consumers of that information and learn to differentiate between what is credible and valuable and what is best ignored.