Planning for the placement and staffing of fire apparatus, whether in a fixed location or a temporary move-up position, involves the comparative evaluation of community risk for each alternative. Unfortunately, our typical understanding of risk is skewed and outdated. Basing operational decisions on inadequate data leads to choices that can be inefficient, ineffective, and legally indefensible.
Of course, many factors combine to influence the danger of a fire response. There must be some estimate of fuel load, along with the exposures and barriers to potential fire spread. For the most part, existing studies get this right, even if only rudimentarily. Yet the single most significant influence on fire frequency is the one modeled most poorly. Kasischke and Turetsky stated in 2006 that “(people) are the dominant source of ignitions except in sparsely populated regions.” Our troubled standard for measuring population is the decennial US census. Prior to the twenty-first century, these federal statistics were the most consistent and widely accessible figures available.
Census population data, which underlie many comprehensive fire plans, suffer several logical failures when used for local community risk evaluation. The first problem is the age of the data. The census is taken only every ten years, and values for the intervening years are estimated through algorithms; the 2010 population figures have now been statistically massaged for seven years. Add to that the fact that the census counts only “night-time” populations, estimating where individuals “live” (or spend the majority of their sleeping time) rather than accounting for their patterns of movement outside the home. The time away from the census-defined abode can be the better part of each 24-hour period, yet this nineteenth-century agrarian idea of home is the value most studies use to count the humans at risk in an area. Still another major problem is the aggregation level of these population estimates. The census block group is the smallest unit the US Census Bureau reports to the public. By definition, a block group typically covers a neighborhood of between 600 and 3,000 individuals, with its values extrapolated from reports by a representative fraction of the area. Finally, in a 2015 study on population density modelling in support of disaster risk assessment, the authors conclude that “block groups are not fine enough to be suitable for specific hazard analysis.” While many planners attempt to break down these manipulated night-time population estimates by taking a simple percentage of an area, there is no statistical support for such assumptions. In fact, the referenced work by Tenerelli et al. describes specific ‘downscaling techniques’ that use intensive proxy attributes to justify any disaggregation of coarse population statistics.
Most of these techniques are far more involved than percentages and have value only when no other population measure is present.
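To make the idea of proxy-based disaggregation concrete, the sketch below distributes a block group population across finer zones in proportion to a proxy attribute such as residential floor area. The zone names, weights, and population figure are hypothetical, and real downscaling methods of the kind Tenerelli et al. describe are considerably more sophisticated.

```python
def disaggregate_population(block_group_pop, proxy_weights):
    """Distribute a coarse population count across finer zones in
    proportion to a proxy attribute (e.g., residential floor area).
    proxy_weights: dict mapping zone id -> proxy value."""
    total = sum(proxy_weights.values())
    if total == 0:
        # No proxy signal: fall back to an even split.
        n = len(proxy_weights)
        return {zone: block_group_pop / n for zone in proxy_weights}
    return {zone: block_group_pop * w / total
            for zone, w in proxy_weights.items()}

# Hypothetical block group of 1,500 people split across three zones,
# weighted by residential floor area in square meters.
zones = {"zone_a": 12000.0, "zone_b": 6000.0, "zone_c": 2000.0}
estimates = disaggregate_population(1500, zones)
```

Note that this is still only as good as the proxy: the whole point of the cited work is that the proxy must actually correlate with where people are.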
Today, near real-time visualization of population surges, the urban influx at the start of the work day and the retreat to suburbia each evening, is becoming a reality. Dynamic population movement can now be mapped using anonymized mobile phone data. According to a 2017 Pew Research Center Fact Sheet, “95% of Americans own a cell phone of some kind” (and well over 75% own devices classified as “smartphones”). Since every one of these devices must regularly ‘ping’ a tower in the cellular network, these signals open bold new opportunities for tracking, visualizing, and analyzing population movement, forming an important layer of any community’s dynamic risk at a fidelity far greater than the census block group.
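A simplified sketch of how such pings could be aggregated into hour-by-hour, cell-by-cell device counts follows. The record layout and grid scheme are illustrative assumptions; production systems use far more rigorous de-identification and spatial indexing.

```python
from collections import defaultdict
from datetime import datetime

def hourly_cell_counts(pings, cell_size=0.01):
    """Aggregate anonymized (device_id, timestamp, lat, lon) pings into
    unique-device counts per grid cell per hour."""
    seen = defaultdict(set)  # (hour, cell) -> set of device ids
    for device_id, ts, lat, lon in pings:
        hour = datetime.fromisoformat(ts).replace(
            minute=0, second=0, microsecond=0)
        # Snap coordinates to a coarse grid cell (~1 km at this size).
        cell = (round(lat / cell_size), round(lon / cell_size))
        seen[(hour, cell)].add(device_id)
    return {key: len(devices) for key, devices in seen.items()}

# Three hypothetical pings: two devices in one cell during the 08:00
# hour, one of them seen again during the 09:00 hour.
pings = [("d1", "2024-03-04T08:05:00", 35.10, -80.84),
         ("d2", "2024-03-04T08:40:00", 35.10, -80.84),
         ("d1", "2024-03-04T09:10:00", 35.10, -80.84)]
counts = hourly_cell_counts(pings)
```

Counting unique devices per hour, rather than raw pings, avoids over-weighting chatty devices; mapping those counts over time is what produces the "surge" picture described above.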
Generic population measures are a great start, but not all people are similar when factoring risk. Some populations are more vulnerable than others. Families that live in flood zones, for instance, have a greater exposure to both life and property loss during heavy rain events. Those who live in large housing complexes with limited egress may be disproportionately disadvantaged during a significant event that requires evacuation. Socioeconomic factors can also limit access to current information or an individual’s ability to react to it. Beyond raw numbers of bodies, we must be able to classify groupings of individuals and label their vulnerability.
There are many other sensors in a community that can be leveraged in modelling the dynamic nature of risk. The risk of flooding, for instance, depends on a source of water input. Rain gauges within your watershed can report the amount of water added over time, and stream gauges measure the depth of water in a channel, informing you of the likelihood of imminent flooding. Increasingly, these sensors are becoming part of the Internet of Things (IoT), which allows remote access to real-time data. Even layers of data that are often considered static can have variability that can be modeled. A school, for instance, is usually categorized as a ‘high risk’ asset, but is it always at the same risk level? The actual risk is far lower during summer months or on weekend evenings. Conversely, its risk status may climb even higher than normal on certain Friday evenings, when the home team is playing a championship game and entire families gather in addition to the normal student population. Just as with pre-plan floor layouts or construction analysis, the use patterns of a building can be noted and fed into a dynamic risk model. The added effort of data collection should be more than repaid by the sharper knowledge gained for steering protection decisions.
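The school example can be sketched as a schedule-aware risk score. The weights, hours, and base value below are illustrative assumptions only, not calibrated figures; a real model would be fit to local occupancy data.

```python
from datetime import datetime

BASE_RISK = 0.8  # hypothetical static 'high risk' score for a school

def school_risk(when, special_events=()):
    """Scale a static risk score by the building's occupancy pattern.
    special_events: dates with unusually high occupancy (e.g., a
    championship game)."""
    if when.date() in special_events:
        return min(1.0, BASE_RISK * 1.5)   # families plus students
    if when.weekday() >= 5:                 # weekend: mostly empty
        return BASE_RISK * 0.2
    if when.month in (6, 7, 8):             # summer break
        return BASE_RISK * 0.3
    if 7 <= when.hour < 16:                 # school in session
        return BASE_RISK
    return BASE_RISK * 0.25                 # weekday evening/night
```

The same pattern, a static score modulated by known use patterns, applies to any asset whose occupancy follows a schedule.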
The reason we do not make more effort to realistically model the threat to our communities is not that it is difficult, but that we have simply never done it that way before. The technology to visualize changing demand and automate recommendations for responding to it has long been proven in the EMS world. The rebuttal is often that the fire service is different. However, simple modifications of existing software provide mobile access to risk as a spatial surface of probability on a user-selected basemap of imagery, topography, or cadastre, for incident management or for support in apparatus move-up decisions. Dispatch software is likewise being modified to recommend not just the closest ambulance but the most appropriate response package of apparatus based on the incident report. The Mobile Area Routing and Vehicle Location Information System™ (MARVLIS) by BCS is leading the movement to change the management of fire apparatus, not just as another point solution but as a significant new platform for visualizing your community and better protecting it.
“Risk” is defined in the Business Dictionary as “the probability or threat of damage, injury, liability, loss, or other negative occurrence.” The threats that face any neighborhood (or fire planning zone) are never constant. We must re-evaluate these time-dependent risk factors and re-imagine the information flow behind the decisions that respond to them. If you only report call history as daily averages, you are ignoring the patterns that actually shape your responses. An action as simple as viewing call demand across the 168 hours of each week will provide a clearer image of the routine daily patterns that exist. And these patterns are likely to differ by season or, at the very least, between the months when school is in session and the months it is not. I recognize commuting changes in my own neighborhood the very day school opens and again the day after it closes each year. If you can see that too, why are you not adjusting response potential to match these realities?
While public safety is not a traditional ‘business’, it can learn a great deal from business leaders like Warren Buffett, who said, “part of making good decisions in business is recognizing the poor decisions you’ve made and why they were poor.” We can do better, and that is exactly why we should.