In the world of disinformation and misinformation there are various degrees, ranging from outright lies to propaganda to urban legends to old wives’ tales. At the low end of the scale are what I think of as persistent popular perceptions.
These are ideas that may at one time have held a kernel of truth but, because of changing social, technological and/or economic factors, no longer hold true, yet people continue to parrot them.
One such persistent perception is that weather forecasters never get it right. We’ve all heard variations on the old joke: “If I was wrong 90 per cent of the time at my job, I’d be fired.”
Certainly, throughout most of human history, even the best forecasts were guesswork based on personal observations and associated folklore. Pretty much anybody who has spent any time on planet Earth can be fairly accurate about what is going to happen weather-wise over the short term.
For almost two millennia, from the time Aristotle published Meteorologica until the Renaissance, this is how we conducted the business of weather forecasting with all its inherent erroneous assumptions and inaccuracies.
But the atmosphere of Earth is an exceptionally complex system. We needed data. That started coming into play during the 15th to 19th centuries with the invention of instruments such as hygrometers, thermometers and barometers. Rapid, long-distance communications in the form of the telegraph made it possible to collect more data from numerous observers over wide geographical areas, further improving our understanding of atmospheric conditions and their role in predicting weather. Still, prognostication remained more art than science, and early broadcast weather forecasts were notoriously inaccurate past a few hours at most.
Not only did we need data, we needed a way to crunch a lot of it really quickly. In 1922, British mathematician Lewis Fry Richardson published a book called Weather Prediction by Numerical Process. It was absolutely brilliant, but, unfortunately, Richardson’s process would have required 64,000 people to do all the calculations needed to produce a timely and accurate weather forecast.
By the 1950s, a team at Princeton University was producing successful 24-hour forecasts using computers to do the vast number of calculations required. Still, accuracy seriously attenuated beyond the 24-hour period.
It wasn’t until satellite imaging, supercomputers and sophisticated numerical weather models came into play in the late 20th century that meteorologists could predict with reasonable reliability into the five- to seven-day range.
Today, 14-day forecasts are commonplace, and although there is still attenuation of accuracy the further out you go, the predicted trends are pretty darn good even if slight adjustments have to be made from day to day.
Another persistent perception is the old foreign versus domestic vehicle argument.
I still hear people saying, “I would never buy a foreign car because parts are hard to find and repairs are expensive.”
I can certainly remember that back in the 1970s, when Asian imports started to show up on the North American market, this might have been the case. That was due in large part to the fact that those cars and parts were still produced overseas.
The other problem was that, at the time, the imports suffered from poor quality.
These days, however, most Asian brands are made in North America, and Asian dealerships are just as plentiful as those of American brands. The cars have also improved to the point that Ford, Chrysler and GM have had to play catch-up.
I suspect the perception may still apply to some premium European brands such as Mercedes, BMW and Porsche. But that, I submit, is more a function of the fact that they are expensive to begin with and, honestly, if someone is buying one of these cars, access to parts and the cost of repairs is probably not a primary factor in the decision.