Gap Year Programs and the Science of Risk Management

Safety Theories and Models for Managing Risk at Gap Year Programs


Ariel Newman had just graduated from Yeshiva University High School for Boys in New York City. He had plans to study at SUNY Binghamton. But before heading to university, Ariel enrolled in a gap year program in Israel, for a year of experiential education, adventure and discovery.


On September 3, 2014, Ariel traveled to the gap year provider’s location, south of Jerusalem. He began a program that included a "boot camp" style regimen of running and navigation exercises. Six days later, he and his group set out on a two-day hike in the Judean Desert.


By 2 p.m. on the second day of the trek, the temperature was reportedly 95 degrees Fahrenheit (35 degrees Celsius). After hours of hiking, Ariel collapsed in the desert heat. He was transported to the hospital, where his core body temperature was reportedly 43 degrees Celsius (109.4 F). He was pronounced dead of heatstroke.


Ariel, the only child of Mark and Ellen Newman, was 18 years old.


Ariel Newman, with his parents Mark and Ellen.



This is the kind of tragedy that gap year leaders hope will never come to pass: unexpected, certainly undesired. How could this have happened?


The gap year program had been running for several years. And Israel is a high-income country with access to a wide variety of safety resources. Masa Israel Journey, a nonprofit organization funded by the government of Israel, sets voluntary safety standards for gap year and other international programs operating in Israel.


The Society for the Protection of Nature in Israel, through its Moked Teva service, provides detailed and specialized safety support for Israeli outdoor excursions.


Tour guides in Israel are licensed through a system that includes a two-year training course with safety education for outdoor guides.


And yet, Ariel’s untimely and tragic death occurred in the desert heat one sunny September afternoon.


We know that safety incidents will occur with gap year organizations, and at other outdoor, experiential, and trip-and-travel programs. What we can’t predict with certainty, however, is what kind of incident will occur, or when, or where, or who will be involved.


It’s this unpredictability that poses a challenge to leaders of outdoor and experiential programs. How do we anticipate the unexpected? How do we guard against unforeseen breakdowns in a safety system full of policies, procedures, and documentation designed to prevent mishaps from occurring?


Gap year programs aren’t the only organizations to struggle with preventing safety incidents. Airlines strive to avoid plane crashes. Nuclear power plant operators work to prevent meltdowns. Hospitals seek to eliminate wrong-limb surgical amputations.


Aviation, power generation, healthcare and other large industries have invested heavily in researching why incidents occur—and by extension, how they can be prevented. They have funded research scientists to conduct investigations, develop theories of incident causation, and establish models that represent those incident causation theories. There are academic journals, conferences, and an ever-growing literature in the field of risk management.


Gap year and other travel and adventure programs can learn from the work that springs from these investments in advancing safety science. Just as the highest-quality gap year programs pay attention to the best thinking in experiential learning, youth development, and pedagogical design, they can benefit greatly from keeping abreast of the best thinking in safety science across industries. Applying current risk management theories and models can help gap year participants have extraordinary educational adventures with good safety outcomes.



Let’s take a look at safety thinking, and the risk management theories and models that have evolved over time. We’ll explore how safety science has advanced over the last 100 years. And we’ll examine how the most current thinking in risk management—revolving around the idea of complex sociotechnical systems—can be applied to improve safety outcomes at gap year programs.


The field of risk management includes career specialists in safety science, a wide variety of theories and models, numerous academic journals, and PhD programs in risk management. From this, best practices have evolved that can be applied across industries—from aviation to international travel.

A variety of academic journals on safety and risk management exist.


The Evolution of Safety Thinking: Four Ages


Let’s begin by briefly considering the trajectory of safety science from the Industrial Revolution to the present day.


The evolution of safety thinking can be broken down into several eras, each representing a distinct approach to understanding why incidents occur, and how they might be prevented. The model below illustrates four separate eras of safety thinking:


  • The Age of Technology,

  • The Age of Human Factors,

  • The Age of Safety Management, and

  • The Age of Systems Thinking.



The Age of Technology


In this model, adapted from Waterson et al., we see the 1800s version of safety thinking as a mechanistic model. The predominant understanding of incident causation was a linear one—the “domino model”—where incidents were seen as resulting from a chain of events.


This linear chain-of-causation thinking is exemplified in the following proverbial rhyme, versions of which date to the 13th century:


For want of a nail the horseshoe was lost.

For want of a horseshoe the horse was lost.

For want of a horse the rider was lost.

For want of a rider the battle was lost.

For want of a battle the kingdom was lost.



Root Cause Analysis was a core element of safety thinking at this time: if one could only identify the originating cause of the problem (want of a nail, in the example above), then the incident (loss of a kingdom) could be prevented.
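
To see the logic in miniature, here is a sketch (in Python, used for illustrations throughout) of a “5 Whys”-style Root Cause Analysis applied to the want-of-a-nail chain: the technique simply walks backwards through the chain of causes until the originating cause is reached.

```python
# A minimal "5 Whys"-style sketch of Root Cause Analysis, using the
# want-of-a-nail chain above: keep asking "why?" until the
# originating cause is reached.
causes = {
    "kingdom lost":   "battle lost",
    "battle lost":    "rider lost",
    "rider lost":     "horse lost",
    "horse lost":     "horseshoe lost",
    "horseshoe lost": "nail wanting",
}

effect = "kingdom lost"
while effect in causes:
    print(f"Why '{effect}'? Because '{causes[effect]}'.")
    effect = causes[effect]
print(f"Root cause: {effect}")  # -> nail wanting
```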


The Age of Human Factors


If we fast-forward to roughly 50 years ago, we find that human behavior—and specifically, human error—is seen as a major cause of incidents. If we can control people’s actions, why, then we can prevent incidents from occurring!


This “Age of Human Factors” brings detailed policy registers, procedures handbooks, operating manuals, and rulebooks of every sort. Control human behavior—the most significant, yet most unpredictable, element of any safety system—and you control risk. This marks the advent of rules-based safety.


It’s important to note that each step in the history of safety thinking represents a cumulative advance in wisdom about how to prevent incidents. The older theories and models are not to be discarded; they are to be built upon. Root Cause Analysis, for instance, remains useful even as safety thinking has moved beyond a mechanistic search for incident causes—but, crucially, more sophisticated and effective tools have been added to the safety manager’s toolkit.


The Age of Safety Management


It didn’t take long, however, for management to recognize that—surprise!—people don’t always follow the rules. And rules cannot be invented to address every conceivable situation, every possible permutation of circumstances in which risk factors appear.


Then, in roughly the 1980s, came the recognition that procedures and inflexible rules have to be balanced with allowing people to use their good judgment, and to adapt dynamically to a constantly changing risk environment.


This is the birth of the “Integrated Safety Culture”: combining rules-based safety, which provides useful guidance to support wise decision-making in times of stress, with the flexibility for individuals to make their own decisions, even if that means not following the documented procedures or the pre-existing plan.


The Age of Systems Thinking


Nuclear power plants are big, complicated things. They have lots of mechanical components, and are operated and maintained by large teams of personnel. Although much attention is put towards their safe operation, dangerous meltdowns continue to occur—the Three Mile Island reactor partial meltdown in 1979, the Chernobyl disaster in 1986, and the Fukushima Daiichi nuclear disaster in 2011.


Damage to No. 3 reactor building at Fukushima Daiichi nuclear power plant, March 2011


It became clear that despite detailed engineering systems, extensive personnel training and oversight, and many other safety measures, managers seemed simply unable to understand and control the enormous complexity of a nuclear generating station. The system was too complex. The safety models that were in place to prevent meltdowns simply weren’t 100 percent effective. A new, more sophisticated model of incident causation, that could account for the complex mix of people and technology, was needed.


This led to the development of complex sociotechnical systems theory.


Complex sociotechnical systems theory begins with a recognition of the profound complexity of “systems”—whether a nuclear power plant or a gap year program. It attempts to understand how people and their behavior influence safety, and how technology—from pressure release valves in a reactor to medical protocols on a desert hiking trip—influences safety outcomes. And it seeks to understand the interaction of people (the “socio-”) with the technologies and tools they use (the “technical”) within a system that also has outside influences and is constantly in flux.


Systems thinking—the application of complex sociotechnical systems (STS) theory—represents the most current and most advanced approach to risk management. It is, however, more abstract and challenging to understand than simpler, albeit less effective, models. It’s therefore important to invest in understanding what complex STS theory means, and how it can be applied to the gap year setting.


One of the principal ideas of systems thinking is the recognition that we cannot have full awareness of, let alone control of, the complex system of an airplane, a hospital operating room, or a gap year program. We therefore need to build in extra safeguards and capacities so that when an inevitable breakdown in our safety system occurs, the system is resilient enough to withstand that breakdown without catastrophic failure.


This has been termed “resilience engineering,” and is a fundamental approach to applying systems thinking to safety in the travel and experiential education contexts. We'll further examine the resilience engineering concept, as it applies to gap year safety, shortly.


The Evolution of Safety Thinking: Incident Causation


Let’s continue exploring how ideas of risk management have evolved over the decades. But this time we'll look at the ways in which thinking around how incidents occur has become more sophisticated, and an increasingly accurate representation of the factors that lead towards a mishap's occurrence.


The Single-Cause Incident Concept: A Simple Linear Model


The idea of what causes an incident—on a gap year program, or anywhere—was in the past considered to be due to a single causal element. The boots fit poorly, and thus caused the blister. The blister popped, which caused the infection. The infection got worse, so the trekker ended up in the hospital. The root cause: ill-fitting boots. The sequence: a linear one, from root cause leading to an unanticipated mishap, leading to an injury or other loss.


In the image below, from the Safety Institute of Australia, building off the work of Hollnagel, we see this illustrated as the “single cause” principle of causation: a simple linear model in which the chain of events leading to an incident is a single, straightforward sequence.



This idea gained popularity in 1931, when Herbert Heinrich published the first edition of his influential book, Industrial Accident Prevention.


Heinrich used a sequence of falling dominos in his text to show how an accident came about:


Credit: Industrial Accident Prevention



Simply eliminate one step in the chain, and voila! No accident:


Credit: Industrial Accident Prevention
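
The domino logic lends itself to a small sketch: the incident occurs only if every domino in the sequence falls, so removing any single step, as Heinrich’s illustration shows, stops the chain. The step names below are invented, echoing the ill-fitting boots example.

```python
# A minimal sketch of Heinrich's domino model: a strictly linear chain
# of causation. Step names are invented for illustration.
CHAIN = [
    "ill-fitting boots worn on the trail",  # the root cause
    "hot spot goes unnoticed",
    "blister forms and breaks",
    "wound becomes infected",
    "trekker ends up in the hospital",      # the incident
]

def incident_occurs(chain, removed_step=None):
    """The incident occurs only if every domino in the chain falls."""
    for step in chain:
        if step == removed_step:
            return False  # the chain is broken; propagation stops
    return True

print(incident_occurs(CHAIN))                                          # True
print(incident_occurs(CHAIN, removed_step="hot spot goes unnoticed"))  # False
```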



Other simplistic, linear-style tools include Fault Tree Analysis and the closely related Fishbone (Ishikawa) Diagram, an example of which appears below.


Here we see all the factors that came together to lead to a gap year program participant slipping and falling on a trail: the hiking guide was naïve and inattentive; the culture on the trip was "shut up and keep hiking"; the trail was slick and ill-maintained; and the gap year participant’s sneakers provided insufficient traction.
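
Expressed as a small fault tree, the same factors combine through AND/OR logic to produce the top-level event. The gate structure below is invented for illustration; a real analysis would derive it from an actual investigation.

```python
# A sketch of fault-tree logic applied to the slip-and-fall example.
# The AND/OR gate structure is invented for illustration.
def OR(*branches):  return any(branches)
def AND(*branches): return all(branches)

# Basic events (the contributing factors above)
guide_inattentive   = True
silencing_culture   = True   # "shut up and keep hiking"
trail_slick         = True
poor_traction_shoes = True

# Top event: participant slips and falls on the trail. Sketch: a
# hazardous surface AND inadequate footwear AND nobody intervening
# (the guide misses it OR the culture suppresses speaking up).
slip_and_fall = AND(
    trail_slick,
    poor_traction_shoes,
    OR(guide_inattentive, silencing_culture),
)

print(slip_and_fall)  # True: every branch of the tree is satisfied
```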



The Multiple-Causes Incident Concept: A Complex Linear Model


Later, it became increasingly clear that multiple factors were involved in causing an incident. An event occurs—say, a person goes on a hike wearing too-small boots—but that doesn’t necessarily lead to an infected blister. Perhaps the trip leader asks hikers to check for hot spots. Or the gap year program instructs participants to break in their boots before commencing their gap year expedition, during which time the poor fit could be discovered and rectified.


But if the trip leaders are not well-trained and proactive about safety, and if the gap year program does not provide a detailed gear list with instructions well in advance of the gap year experience, these “latent conditions” can combine with the event—the inadequate footwear—to cause an incident.


This is the “epidemiological” model. It features one or more events, plus one or more latent conditions. The “epidemiological” term references disease transmission modeling, where, for example, a person ventures into the forest in search of wild game (the event), and encounters an animal such as a bat or civet cat that harbors a pathogen (the disease reservoir). The person then comes back into a populated area, leading to an outbreak or epidemic of disease.


This incident model is still a relatively simplistic, linear model, but it was also one of the first to represent incidents as happening within a system of elements.


The epidemiological model gained prominence in 1990, after James Reason published a paper on the topic in the Philosophical Transactions of the Royal Society.


Reason described risk management systems as a series of barriers and defenses. If a hazard were able to get past each of the barriers and defenses by finding a way through the holes in those obstacles, then an incident would occur. Only when all the conditions lined up right would the hazard successfully pass the obstacles and cause an incident.

Reason’s conception, with the easy-to-remember name “Swiss cheese model”



This model, while superseded by complex systems models that more accurately represent incident causation, uses evocative symbolism and remains in the public consciousness; it was cited in the New York Times in August 2021 in coverage of COVID-19 safety.
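
A toy simulation makes the barriers-and-holes idea concrete: each defensive layer stops a hazard with some probability, and an incident requires the hazard to find the hole in every layer at once. The layer names and hole probabilities below are invented for illustration.

```python
# A toy Monte Carlo sketch of Reason's Swiss cheese model: an incident
# occurs only when a hazard passes through the hole in every barrier.
# Layer names and hole probabilities are invented.
import random

LAYERS = {  # probability that a hazard slips through each layer
    "gear list & pre-trip briefing": 0.10,
    "leader field training":         0.05,
    "daily safety check":            0.08,
    "evacuation protocol":           0.02,
}

def hazard_penetrates(layers):
    """True if the hazard finds the hole in every layer."""
    return all(random.random() < p for p in layers.values())

trials = 1_000_000
incidents = sum(hazard_penetrates(LAYERS) for _ in range(trials))
print(f"incident rate: {incidents / trials:.6f}")
# With independent layers, the expected rate is the product of the
# hole probabilities: 0.10 * 0.05 * 0.08 * 0.02 = 8e-6.
```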


Incident Causation as Taking Place within a Complex System


Finally, risk management theoreticians arrived at what represents the current best thinking in incident causation: the complex systems model.


Here, a complex and ever-changing array of social and technological factors interact in impossible-to-predict ways, leading to an incident. This is the idea of complex sociotechnical systems, as applied to risk management.


Examples of complex systems include the global climate crisis; issues of diversity, equity, and inclusion; and gap year programs.


Examples of complex socio-technical systems



Complex systems are characterized by:


  • Difficulty in achieving widely shared recognition that a problem even exists, and agreeing on a shared definition of the problem

  • Difficulty identifying all the specific factors that influence the problem

  • Limited or no influence or control over some causal elements of the problem

  • Uncertainty about the impacts of specific interventions

  • Incomplete information about the causes of the problem and the effectiveness of potential solutions

  • A constantly shifting landscape where the nature of the problem itself and potential solutions are always changing


This model is the most accurate we have to date. However, it’s also the most difficult to conceptualize and work with.


A variety of terms have been used by safety specialists to describe complex STS theory applied to risk management: Safety Differently, Safety-II, Resilience Engineering, Guided Adaptability, and High Reliability Organizations, among others.

Books exploring risk management through complex STS theory



A panoply of terms has been employed in efforts to impose order and structure on the idea of complex systems:



Perhaps the best-known model, however, is the “AcciMap” approach, developed by the Danish professor Jens Rasmussen, whose pioneering work in nuclear safety has been adapted for experiential/adventure travel and other contexts.


Rasmussen saw different levels at which safety could be influenced:


  • Government, which can pass and enforce safety laws;

  • Regulators and industry associations, such as Masa Israel Journey or the Gap Year Association, which can establish detailed standards;

  • Organizations, like individual gap year provider companies, which can establish sound operating policies to manage risk;

  • Managers, such as gap year program directors, who can develop work plans that incorporate good safety planning;

  • Line staff, for example gap year trip leaders, who perform day-to-day activities with prudence and due care; and

  • Work tasks, such as running a rock climbing site, which have been designed to have minimal inherent risks.


AcciMap adapted from: Jens Rasmussen, “Risk Management in a Dynamic Society: A Modelling Problem,” Safety Science 27(2–3), 1997.



Rasmussen gave the example of a motor vehicle accident in which a tanker truck rolled, spilling its contents and polluting a water supply. The analysis identified causal factors at all levels (government, regulators/associations, the transportation company, personnel, and work tasks) that contributed to the incident.


Rasmussen’s AcciMap of a motor vehicle accident leading to water pollution.
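
In structural terms, an AcciMap can be thought of as a layered map: each of Rasmussen’s levels holds contributing factors, with causal links running between levels. The sketch below revisits the hypothetical slip-and-fall example; every factor shown is invented, not drawn from any real investigation.

```python
# A sketch of an AcciMap as a layered structure. Each level holds
# contributing factors; links run between levels. All factors are
# hypothetical, echoing the slip-and-fall example.
accimap = {
    "government":   ["no funding for trail maintenance"],
    "regulators":   ["voluntary standards silent on footwear"],
    "organization": ["no pre-trip gear-check policy"],
    "management":   ["route chosen without a recent trail report"],
    "staff":        ["guide did not flag the slick section"],
    "work & tasks": ["steep, eroded trail segment"],
}

links = [  # (cause level -> effect level)
    ("government", "regulators"),
    ("regulators", "organization"),
    ("organization", "management"),
    ("management", "staff"),
    ("staff", "work & tasks"),
]

for level, factors in accimap.items():
    print(f"{level:>13}: {'; '.join(factors)}")
```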



But AcciMap, and the AcciMap variants that have evolved over the years, are far from the only models that seek to represent complex sociotechnical systems theory applied to risk management.


For instance, the Functional Resonance Analysis Method (FRAM) models complex sociotechnical systems as an intricate web of interconnecting influences. Primarily used in large industrial applications, it’s less likely to be useful for safety management in the gap year context.


FRAM: Too abstruse for the gap year context


The Risk Domains Model


A model exists, however, that adapts the complex sociotechnical systems elements of AcciMap and similar frameworks, and applies them to the contexts of gap year and related experiential, adventure, wilderness, outdoor and travel programs.


This is the Risk Domains model, pictured below.


Here we can see eight “direct risk domains”:


  • Safety culture

  • Activities & program areas

  • Staff

  • Equipment

  • Participants

  • Subcontractors (vendors/providers)

  • Transportation

  • Business administration


Each of these areas holds certain risks. For example, a homestay location may harbor risks of food-borne illness. The participant domain brings risks from participants who, for instance, are poorly trained in safety practices, fail to follow safety directions, or are medically unsuitable for an activity.


In addition, there are four “underlying risk domains”:


  • Government

  • Society

  • Outdoor Industry

  • Business


Here, we see that sound government regulation can support good safety outcomes; a society that values safety and human life encourages good safety practices; industry associations like the Gap Year Association can provide powerful support for good risk management; and large corporations that feel a civic responsibility will not impede the government’s capacity to enforce sensible safety regulation.


Risks in any of these domains can combine to directly or indirectly lead to an incident, as we see illustrated in the web of interconnections between each risk domain and an ultimate incident.


Managing risks within the context of the Risk Domains model has two components.


First, in each risk domain, risks are identified that may apply to an organization.


For example, a gap year provider may recognize that it must intentionally develop a positive safety culture each season with its new crew of field leaders, lest the propensity for adventurous risk-taking inherent in young adults lead to a safety incident.


And gap year program administrators may need to invest in business administration-related protections to secure medical form confidentiality, protect against embezzlement or other theft, and guard against ransomware and other IT risks.


Policies, procedures, values, and systems should then be instituted to bring the risks identified in each domain down to a socially acceptable level.


Policies might include, for example, a rule that safety briefings are held before each activity, or that incident reports are generated after all non-trivial incidents.


Procedures might include the communications systems between a gap year participant and program staff, in case a homestay or placement has problems.


Values might include, for instance, the value that safety is important, and should be taken seriously.


And systems might include medical screening, field leader training, or a system for assessing suitability of subcontractors (providers).


The idea is not to bring risks to zero—that would paralyze any gap year program—but to bring them to a level where, if an incident does occur (as is inevitable from time to time), stakeholders such as parents, newsmedia, and regulators understand that reasonable precautions were taken against reasonably foreseeable harms.
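
To make this first component concrete, here is a minimal sketch of a risk register keyed by risk domain. The domains come from the model above; the example risks, controls, ratings, and the acceptability threshold are all invented.

```python
# A minimal sketch of a risk register organized by direct risk domain.
# Example risks, controls, ratings, and the threshold are invented.
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str
    likelihood: int               # 1 (rare) .. 5 (frequent)
    severity: int                 # 1 (minor) .. 5 (catastrophic)
    controls: list[str] = field(default_factory=list)

    @property
    def rating(self) -> int:
        return self.likelihood * self.severity

register = {
    "participants": [
        Risk("fails to follow safety directions", 3, 3,
             ["pre-trip safety briefing", "activity-site orientation"]),
    ],
    "equipment": [
        Risk("ill-fitting boots cause foot injury", 3, 2,
             ["gear list sent well in advance", "day-one gear check"]),
    ],
    "transportation": [
        Risk("vehicle accident in transit", 2, 5,
             ["vetted drivers", "seat belt policy", "no night driving"]),
    ],
}

ACCEPTABLE = 9  # invented threshold for a "socially acceptable" rating
for domain, risks in register.items():
    for r in risks:
        flag = "OK" if r.rating <= ACCEPTABLE else "MITIGATE FURTHER"
        print(f"{domain:>15} | {r.description:<36} | {r.rating:>2} | {flag}")
```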


Risk Management Instruments


In addition to instituting specific policies, procedures, values and systems to maintain identified risks in all relevant risk domains at a socially acceptable level, there are broad-based tools, or instruments, that can be applied to manage risks across multiple or all risk domains at the same time.


This is the second component of managing risks within the Risk Domains model.


These risk management instruments are:


  1. Risk Transfer

  2. Incident Management

  3. Incident Reporting

  4. Incident Reviews

  5. Risk Management Committee

  6. Medical Screening

  7. Risk Management Reviews

  8. Media Relations

  9. Documentation

  10. Accreditation

  11. Seeing Systems


Risk Management Instruments, which can manage risks across multiple risk domains


Risk Transfer refers to the presence of insurance policies, subcontractors who assume risk, and risk transfer documents like liability waivers.


Incident Management refers to having a documented and practiced plan for responding to emergencies.


Incident Reporting means documenting safety incidents and their potential causes, analyzing incidents individually and in the aggregate, and then developing and disseminating responses (in the form of revised training materials, safety reports, new policies, etc.) to respond to the incidents, and the trends and patterns they illuminate.
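
As a sketch of the aggregate-analysis step just described, the snippet below tallies incident reports by category to surface trends; the record fields and example data are invented.

```python
# A small sketch of aggregate incident analysis: tally reports by
# category to surface trends. Fields and example records are invented.
from collections import Counter

incident_reports = [
    {"category": "food-borne illness", "activity": "homestay", "severity": 2},
    {"category": "slip/fall",          "activity": "day hike", "severity": 2},
    {"category": "blister",            "activity": "trek",     "severity": 1},
    {"category": "slip/fall",          "activity": "trek",     "severity": 3},
]

by_category = Counter(r["category"] for r in incident_reports)
for category, count in by_category.most_common():
    print(f"{category}: {count}")
# A recurring category (e.g., slips and falls on hikes) would feed back
# into revised training materials, safety reports, and policies.
```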


Incident Reviews means having a process for the formal review of major incidents, by internal or external review teams.


Risk Management Committee indicates a group of individuals, including those from outside the organization, who have relevant subject matter expertise, and who can provide resources and unbiased guidance.


Medical Screening refers to structures to ensure that participants and staff are medically well-matched to their circumstances.


Risk Management Reviews are formalized, periodic analyses of the organization’s safety practices.


Media Relations refers to staff who have the training and materials to work effectively with newsmedia in the case of a newsworthy safety incident.