I was reading an article online that mentioned the book The Weather Machine: A Journey Inside the Forecast by Andrew Blum. On a whim, I ordered a paper copy, read it, and enjoyed it quite a bit. I was curious to learn more about how modern weather forecasts are put together, and this short (less than 200 pages) popular science book scratched that itch nicely.
Narration of the material is mostly in simple chronological order, which works for this book. The first part, titled Calculation, talks about the establishment of telegraph lines in the mid 19th century and how they enabled telegraph offices to exchange basic information about the weather at their locations instantaneously. Before that point, weather was on the minds of government officials and scientists, as it affected the transportation of goods and armies, but there was not much anyone could do to put together even a rudimentary global picture, since information couldn't be moved faster than the weather itself. Once telegraphing became possible, within a couple of decades the International Meteorological Organization was formed and met in Vienna in 1873, where representatives of newly formed weather offices from about 20 countries started talking about the international exchange of weather information. Even before 1900, a Norwegian scientist named Vilhelm Bjerknes developed a model (his circulation theorem) to predict how air at different pressures and altitudes would move over the earth's curved surface, which made predicting the weather a real possibility! But he himself lamented that manually calculating tomorrow's weather patterns using his model might take perhaps a month, rendering the result useless. Still, it was certainly a good start. In the early part of the 20th century, a British scientist (Lewis Fry Richardson) suggested building a huge palace or coliseum-like structure filled with 64,000 "human calculators" that would compute the weather, a great leap of imagination, since it amounted to parallel processing the weather information for different parts of the earth.
The second part of the book, titled Observation, covers all the different ways in which data is collected today, as there is no weather prediction without observation. It also gets into looking up (at the sky from the earth's surface) and looking down (from space), and merging the information in 3D plus time. There are still a lot of manual observations that get reported and collated. There is also a lot of variation in the quality and type of information collected (a simple temperature reported by a person on an island three times a day; minute-by-minute temperature, cloud coverage, and humidity reported by sensors; satellite data; and so on), all of which has to be blended into one flexible, forgiving model that will accept all these variations and still produce coherent predictions. Even the quality of data provided by weather satellites varies widely: he points out a 20-year-old Indian satellite still sending data as it races towards the end of its life vs. a brand-new Chinese satellite, launched just a year ago, still being prepped to go into full service.
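The book stays away from the math, but the core idea behind blending such mismatched observations is to weight each reading by how much you trust it. Here is a minimal sketch of that idea in Python (my own illustration, using inverse-variance weighting; the names and numbers are assumptions, not from the book):

```python
import math

def blend(observations):
    """Combine heterogeneous readings via inverse-variance weighting.

    observations: list of (value, stddev) pairs, e.g. a coarse human
    report alongside a precise sensor feed.
    Returns the blended estimate and its combined uncertainty.
    """
    weights = [1.0 / sigma**2 for _, sigma in observations]
    total = sum(weights)
    estimate = sum(w * v for w, (v, _) in zip(weights, observations)) / total
    return estimate, math.sqrt(1.0 / total)

# A rough island report (+/- 2.0 C) and a precise sensor (+/- 0.3 C):
print(blend([(21.0, 2.0), (22.4, 0.3)]))  # estimate is dominated by the sensor
```

Real assimilation systems do this over millions of variables at once, but the principle, trusting precise observations more than coarse ones, is the same.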
He traveled to Norway, to Reading (England), and to various parts of the US to personally see offices, installations, large control rooms, and satellite launch facilities, and to interview scientists, which makes the book quite interesting to readers like me. In his view, though the US contributes a lot, both in funding and in knowledge/data, its weather bureaucracy is a mess: a jumble of acronyms spread across multiple departments that don't coordinate or work well together. The European setup, by comparison, is coherent and well organized, which is why Europe is currently the leader in providing the best predictions, and why we have started hearing about the "European model" more often in US TV weather reports.
The author's visits to EUMETSAT (the European meteorological satellite agency) in Darmstadt, Germany, and to ECMWF (the European Centre for Medium-Range Weather Forecasts, located in Reading, England), discussed in Part 3, titled Simulation, are the really neat parts of the book. These chapters discuss the satellites that are routinely launched to observe the earth and how their data is received on the ground. The geostationary (GEO) satellites, located about 36,000 km above the earth, move at the same speed and direction as the earth's rotation and so remain above the same spot; they provide the big picture but are considered a bit boring, since they are designed to "stay" in one place and beam down information continuously. The low earth orbit (LEO) satellites are considered much more exciting: they circle the earth every 90 minutes, pole to pole, drifting in longitude with each pass so that they regularly cover the entire planet. During each 90-minute orbit, as they fly over Norway for just 10 to 15 minutes, they dump gigabytes worth of data gathered over the previous hour and a half. This is like downloading an entire digital movie over Wi-Fi to your home computer from a fast-moving car driving past your house, yet it works reliably, day in and day out! As soon as the data comes in, it is transmitted to supercomputer centers around the world to be assimilated into updated forecasts.
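A quick back-of-the-envelope check shows how demanding that pass is. The figures below are my own assumptions for illustration (the book just says "gigabytes" over a 10-15 minute window):

```python
# Required sustained downlink rate for a polar-orbit data dump.
data_gb = 15            # assumed data gathered over one ~90-minute orbit
pass_seconds = 12 * 60  # assumed visibility window over the ground station

bits = data_gb * 8 * 10**9
throughput_mbps = bits / pass_seconds / 10**6
print(f"required sustained downlink: {throughput_mbps:.0f} Mbit/s")
# -> required sustained downlink: 167 Mbit/s
```

Sustaining well over 100 Mbit/s to a satellite moving at roughly 7.5 km/s, every orbit, is the part that deserves the "reliably, day in and day out" praise.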
In my Ethics & Emerging Technologies lecture, I talk about ML (machine learning) models that take past data, make predictions about the future, and, as new data becomes available, keep correcting themselves wherever the predictions were off, so that the next round of predictions is more accurate. Blum explains this flow, as used in weather prediction, nicely. The forecasters don't take a set of data, run a computation to predict tomorrow's weather, and then start all over again. Instead, a continuously evolving model of the weather runs inside the supercomputers. The actual earth is another "model", observed through all the incoming data, and the observations are continuously fed into the supercomputer's predictive model to keep correcting its errors. This has been going on for years, and the European model can now predict about a week's worth of weather changes ahead! As computing power and the precision and variety of observations (e.g., moisture/humidity data in addition to temperature) improve, the Reading center wants to keep pushing the envelope, predicting the 7th, 8th, and 9th days more and more precisely. Since this provides a clear, linear measure of progress, and scientists come to the Reading office from many countries to test and contribute ideas that make the predictions better and better, for now there is no end in sight to how far they can advance the field. This is truly a global, borderless scientific endeavor that we should be proud of.
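That continuous loop is essentially a predict/correct cycle. Here is a toy sketch of it for a single quantity, say the temperature at one location; the dynamics and the nudging gain are made up for illustration and are nothing like ECMWF's actual assimilation machinery:

```python
def forecast(state):
    """Stand-in for the physics model: advance the state one step."""
    return state * 0.98 + 0.5  # assumed toy dynamics

def assimilate(predicted, observed, gain=0.4):
    """Nudge the prediction toward the observation; the gain encodes
    how much we trust observations relative to the model."""
    return predicted + gain * (observed - predicted)

state = 20.0
for observed in [20.5, 21.0, 20.2, 19.8]:    # incoming measurements
    predicted = forecast(state)              # predict...
    state = assimilate(predicted, observed)  # ...then correct
    print(f"predicted {predicted:.2f}, observed {observed:.2f}, corrected {state:.2f}")
```

The model never "starts over": each corrected state becomes the starting point for the next forecast, which is exactly the flow Blum describes.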
The last part, titled Preservation, talks about the pieces of these systems that are being privatized and the negative impact that may bring in the future. It also discusses how mundane receiving weather reports on our smartphones has become despite the herculean efforts in the background. The diplomatic conferences and the mechanics and funding of the international efforts and organizations (per Blum, the US funds about 20% of the entire world's effort but also acts arrogantly, dissing the annual meetings) round off the story.
As I have mentioned in other reviews, I am still puzzled as to why books like this don't include a ton of photos and illustrations; this one screams for visuals but has absolutely none. I tried to look up the reasons online and came across two recurring themes.
1. In years past, adding pictures to books was an expensive proposition, so they were left out to keep production costs low. In parallel, since children's books had lots of pictures, a bit of snobbery developed among authors and readers of non-fiction, carrying the notion that serious adult books shouldn't have pictures.
2. Fiction writers preferred to paint word pictures, allowing readers some latitude to imagine what is described as they prefer, rather than locking down the visuals in precise illustrations. (As a side note, there is a century-long tradition of illustrating stories and novels in the Tamil magazines I grew up with, which continues to this day in India.)
I don't buy either of these arguments for the non-fiction books I read. In the one book I published, I included a bunch of pictures that I truly think helped explain things better, and the cost of including them during printing was negligible. The only reason I can accept is that getting permissions and approvals costs a lot of time and money, so authors avoid them. But I wish this trend would change. This book could have used plenty of pictures, since it is full of descriptions of buildings, satellites, and people the author personally visited. He could have simply snapped photos with his cell phone and included them (which would have mostly eliminated the permission hurdle). Perhaps there are other reasons I am not aware of.