In brief: Security risk assessment addresses intentional threats by adaptive adversaries, not accidental hazards, and requires a fundamentally different analytical approach from safety risk assessment. ISO 31000 provides the framework and HB 167 provides the security-specific methodology using a Threat-Vulnerability-Consequence (TVC) model. Most built environment projects receive assessments that describe a problem space rather than assess risk.
A security risk assessment lands on the project director's desk. It opens with a threat overview citing global terrorism statistics. It lists the asset's characteristics. It presents a risk matrix with colour-coded cells. It concludes with a set of recommendations: access control, CCTV, lighting, perimeter barriers, security staffing. The project director reads it, approves the spend, and moves on.
Six months later, at a design gate review, someone asks why the access control system costs $2.4 million. The project director reaches for the security risk assessment to find the justification. There is no justification. The recommendation says "access control" but does not connect it to a specific threat scenario, does not explain what vulnerability it addresses, does not document why this level of control is proportionate, and does not establish what residual risk remains after implementation. The assessment described a problem space. It did not assess risk.
Why security risk is different
Security risk assessment is not safety risk assessment applied to a different topic. The analytical logic is fundamentally different, and confusing the two produces assessments that look structured but do not hold up under scrutiny.
Intentional adversaries, not accidental hazards. Safety risk addresses accidental events: equipment failure, human error, natural disasters. These events have broadly predictable frequency distributions. Security risk addresses intentional acts by people who choose targets, adapt tactics, and respond to countermeasures. Jore (2017) established this distinction as the defining feature of security as a discipline: adversaries are intelligent, adaptive, and responsive. A safety hazard does not change its behaviour because you installed a barrier. A security threat does.
Scenarios, not frequencies. Because adversaries adapt, historical attack frequency is a poor predictor of future targeting. Brown & Cox (2010) demonstrated mathematically that probabilistic risk assessment can mislead when applied to terrorism, because hardening one target displaces threat to softer targets. Aven & Renn (2008) argued for scenario-based assessment over frequency-based probability for exactly this reason. The question is not "how often has this happened?" but "what could a capable, motivated adversary do to this site, and what would the consequences be?"
Expert judgement, structured by method. Quantitative probability estimates for security events carry false precision. Ezell et al. (2010) recommended risk-informed rather than risk-based decision-making for terrorism contexts, precisely because the data does not support actuarial analysis. Brooks (2011) validated expert judgement as the appropriate basis for security risk assessment. The role of methodology (HB 167, ISO 31000) is to structure that judgement so it is transparent, repeatable, and auditable, not to replace it with calculation.
Projects that apply safety risk thinking to security problems produce assessments that look rigorous but miss the point. They calculate probabilities from historical data that does not represent the threat. They score risks mechanically without considering adversary decision-making. They recommend treatments without connecting them to the scenarios they are supposed to address.
The framework: ISO 31000 and HB 167
Two standards provide the architecture for security risk assessment in Australia.
ISO 31000:2018 defines risk as "the effect of uncertainty on objectives" and establishes a five-step process: establish context, identify risks, analyse risks, evaluate risks, treat risks, with communication and monitoring throughout. Purdy (2010) emphasised that ISO 31000 deliberately supports qualitative approaches and that context-setting is the critical first step. Lalonde & Boiral (2012) showed that ISO 31000's generic framework requires domain-specific guidance to be effective. For security, that guidance is HB 167.
HB 167:2006 (Security Risk Management) adapts the generic risk management process (originally AS/NZS 4360, now ISO 31000) for security contexts using the Threat-Vulnerability-Consequence (TVC) model. Risk is a function of three components that must all be assessed:
- Threat: capability, intent, history, and opportunity of adversaries
- Vulnerability: attractiveness, accessibility, organic security, and population density of the target
- Consequence: human, economic, administration, and environmental impacts
This is a direct descendant of Kaplan & Garrick's (1981) foundational risk triplet: what can go wrong, how likely is it, and how bad would it be? The TVC model operationalises this triplet for security by decomposing each element into assessable components. The US government's DHS framework (Moteff, 2004; Willis et al., 2005) converged independently on the same TVC structure, suggesting this is not an arbitrary framework but a robust analytical pattern.
HB 167 also introduces the treatment hierarchy: avoid, reduce likelihood, reduce consequence, transfer, accept. Treatments must be justified against the SFAIRP principle (discussed below), and residual risk must be documented after treatment is applied.
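The TVC decomposition can be sketched as a simple data structure. The component names follow HB 167 as described above, but the three-point rating scale and the conservative worst-component combination rule are illustrative assumptions for this sketch, not anything the handbook prescribes:

```python
from dataclasses import dataclass

RATINGS = ["low", "medium", "high"]  # illustrative ordinal scale, worst last

@dataclass
class ThreatScenario:
    """One credible threat scenario, rated per HB 167's TVC components.
    Each component is rated by its contribution to risk ('high' = worse),
    so weak organic security would rate 'high', not 'low'."""
    name: str
    threat: dict        # capability, intent, history, opportunity
    vulnerability: dict # attractiveness, accessibility, organic_security, population_density
    consequence: dict   # human, economic, administration, environmental

def worst(ratings):
    """Take the highest ordinal rating (a conservative, illustrative rule)."""
    return max(ratings, key=RATINGS.index)

def tvc_profile(s: ThreatScenario) -> dict:
    """Summarise each TVC element by its worst component rating."""
    return {
        "threat": worst(s.threat.values()),
        "vulnerability": worst(s.vulnerability.values()),
        "consequence": worst(s.consequence.values()),
    }

# Hypothetical scenario for demonstration:
scenario = ThreatScenario(
    name="Hostile vehicle attack at main entry",
    threat={"capability": "medium", "intent": "high",
            "history": "low", "opportunity": "high"},
    vulnerability={"attractiveness": "high", "accessibility": "high",
                   "organic_security": "low", "population_density": "high"},
    consequence={"human": "high", "economic": "medium",
                 "administration": "medium", "environmental": "low"},
)
profile = tvc_profile(scenario)
```

The point of the structure is that no element can be skipped: a scenario with an unassessed vulnerability or consequence simply cannot produce a risk profile.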
Step 1: Establish context and criteria
Every decision made later in the assessment depends on what is established here. Context-setting is not a preamble. It is the analytical foundation.
Scope and boundaries. What assets, people, information, and operations are within scope? What project phase applies (concept, detailed design, operations)? What is the assessment's decision purpose: informing design, justifying expenditure, satisfying a planning condition, or supporting an assurance gate?
Risk criteria. Before any risk is identified, the criteria for evaluating it must be defined. What level of risk is tolerable? What triggers treatment? What is unacceptable? The HSE's R2P2 framework (2001) established the three-zone model (unacceptable, tolerable/ALARP, broadly acceptable) that HB 167 adopts. Van Coile et al. (2018) demonstrated that these criteria must be calibrated to context, because what is tolerable for one facility type is not tolerable for another. A suburban office building and a diplomatic precinct have different risk appetites. If the criteria are not defined before assessment, the evaluation step becomes arbitrary.
Stakeholders and information sources. Who contributes to the assessment? Renn (2008) argued that risk governance must involve multiple stakeholders for both analytical quality and decision legitimacy. For built environment projects, this typically includes the client, design team, facility operator, law enforcement, and relevant government agencies. The assessment is only as good as the information it draws on.
Step 2: Identify threats
Threat identification asks: what adversaries could target this asset, with what intent, using what capability?
This is not a literature review of global terrorism. It is a structured analysis of the threat scenarios credible for this specific site, informed by intelligence, crime data, and scenario analysis.
Structured analysis, not speculation. Heuer & Pherson (2010) codified over 50 structured analytic techniques (SATs) used in intelligence analysis, designed to mitigate cognitive biases in expert judgement. Key techniques for security risk include Analysis of Competing Hypotheses, Key Assumptions Check, and Red Team Analysis. Borrion (2013) developed crime scripting as a systematic method for decomposing attack scenarios into sequential steps (preparation, target selection, approach, execution, escape), with each step identifying intervention opportunities.
Threat as a function of adversary and target. HB 167 assesses threat through four components: adversary capability (what can they do?), intent (what do they want to do?), history (what have they done?), and opportunity (what does the environment allow?). Clarke & Newman (2006) added the "EVIL DONE" framework for target attractiveness: Exposed, Vital, Iconic, Legitimate, Destructible, Occupied, Near, Easy. The intersection of adversary characteristics and target characteristics determines the credible threat profile.
Multiple threat types. A single site may face different threat types from different adversaries: terrorism, crime, protest, insider threat, espionage. Each has different characteristics and requires different analytical treatment. Lumping them into a single "security threat" category produces analysis too blunt to inform design decisions.
Parnell et al. (2009) demonstrated that intelligent adversaries optimise against defender strategies. The threat assessment must therefore consider how adversaries might respond to proposed security measures, not just their current behaviour. A static threat assessment that does not account for adversary adaptation is incomplete.
Step 3: Analyse vulnerability and consequence
For each credible threat scenario, the assessment must determine: how exposed is the asset, and what would the impact be?
Vulnerability is not a generic score. HB 167 decomposes vulnerability into attractiveness (how desirable is this target to this adversary?), accessibility (how easy is it to reach?), organic security (what existing protection exists?), and population density (how many people are exposed?). Each component is assessed against each specific threat scenario. A site can be highly vulnerable to one threat type and well-protected against another. A single vulnerability rating for the entire site is meaningless.
Layered assessment. Coole et al. (2012) analysed layered security concepts across military, industrial, and corporate domains, arguing for "security in depth": multiple independent layers where failure of one does not compromise overall protection. Vulnerability assessment should evaluate each layer: deterrence, detection, delay, response. Where layers are absent or weak, vulnerability is high for the threat scenarios that exploit those gaps.
Consequence across dimensions. HB 167 assesses consequence across four dimensions: human impact (injury, death), economic impact (direct costs, business interruption), administration impact (governance disruption, service failure), and environmental impact. Each dimension is assessed at a defined scale, and the highest consequence drives the risk evaluation. Reducing consequence assessment to a single number collapses distinctions that matter for treatment design.
The risk matrix, used properly. Cox (2008) demonstrated that risk matrices can produce worse-than-random prioritisation through range compression, centring bias, and inconsistent scoring. Duijm (2015) responded with practical design guidance: appropriate axis scaling (logarithmic where possible), calibrated category boundaries, and explicit risk criteria that drive the colour coding. A well-designed matrix with trained assessors is a legitimate screening tool. A poorly designed matrix with ad hoc categories is a randomiser with colour coding. The difference is in the design, not the format.
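One design discipline follows directly from Cox's critique: colour coding should be derived from an explicit rule and checked for consistency, never assigned cell by cell. A sketch, with illustrative scales and an assumed scoring rule:

```python
# Risk matrix whose zones come from an explicit rule, with a consistency
# check in the spirit of Cox (2008): moving to a higher likelihood or
# consequence must never lower the zone. Scales and the scoring rule are
# illustrative assumptions.
LIKELIHOOD = ["rare", "possible", "likely"]
CONSEQUENCE = ["minor", "moderate", "severe"]
ZONES = ["broadly acceptable", "tolerable", "unacceptable"]

def zone(likelihood: str, consequence: str) -> str:
    """Map a cell to a zone via an explicit rule, not per-cell judgement."""
    score = LIKELIHOOD.index(likelihood) + CONSEQUENCE.index(consequence)
    if score >= 4:
        return "unacceptable"
    if score >= 2:
        return "tolerable"
    return "broadly acceptable"

def is_monotone() -> bool:
    """Verify no cell is rated below a cell with lower likelihood
    and consequence (a basic internal-consistency test)."""
    rank = {z: i for i, z in enumerate(ZONES)}
    for li in range(len(LIKELIHOOD)):
        for ci in range(len(CONSEQUENCE)):
            here = rank[zone(LIKELIHOOD[li], CONSEQUENCE[ci])]
            if li + 1 < len(LIKELIHOOD) and rank[zone(LIKELIHOOD[li + 1], CONSEQUENCE[ci])] < here:
                return False
            if ci + 1 < len(CONSEQUENCE) and rank[zone(LIKELIHOOD[li], CONSEQUENCE[ci + 1])] < here:
                return False
    return True
```

A matrix that fails this kind of check is one where a demonstrably worse risk can receive a more reassuring colour.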
Step 4: Evaluate risk
Risk evaluation compares the analysed risk profile against the criteria established in Step 1. This is where the assessment becomes a decision instrument.
Three-zone evaluation. Following the HSE R2P2 framework and HB 167, risks fall into three zones: unacceptable (must be reduced regardless of cost), tolerable (acceptable only if reduced SFAIRP), and broadly acceptable (no further treatment required). The boundaries between these zones are defined by the risk criteria, not by the assessor's intuition. Marhavilas et al. (2021) reviewed risk acceptance criteria across industries and confirmed international convergence on this three-zone model.
Evaluation drives treatment priority. Risks in the unacceptable zone demand immediate treatment. Risks in the tolerable zone require SFAIRP justification. Risks in the broadly acceptable zone are monitored but not treated. Without this structure, every identified risk receives equal attention, and the assessment cannot tell the project director where to invest first.
Risk-informed, not risk-based. Zio & Pedroni (2012) drew the critical distinction: risk-informed decision-making uses risk assessment as one input alongside values, constraints, stakeholder concerns, and uncertainty. Risk-based decision-making treats the risk score as determinative. Security risk assessment should inform decisions, not automate them. The project director uses the assessment. The assessment does not replace the project director.
Step 5: Treat risk (and justify it)
Treatment converts the risk evaluation into design, operational, and management responses. Each treatment must be connected to the risk it addresses and justified against the SFAIRP principle.
The treatment hierarchy. HB 167 establishes a hierarchy:

- Avoid the risk: eliminate the activity or exposure
- Reduce likelihood: make the attack harder or less likely to succeed
- Reduce consequence: limit the impact if an attack occurs
- Transfer the risk: insurance, contractual allocation
- Accept the risk: document the residual risk and the rationale for acceptance

Treatments higher in the hierarchy are preferred. Freilich et al. (2019) confirmed that Situational Crime Prevention mechanisms (increase effort, increase risks, reduce rewards, reduce provocations, remove excuses) map directly to this hierarchy, providing the criminological evidence base for treatment selection.
SFAIRP: what it means and what it does not. Under Australian WHS legislation, security risks are treated as foreseeable, so duty holders must manage and reduce them SFAIRP (So Far As Is Reasonably Practicable). Jones-Lee & Aven (2011) provided the definitive analysis: SFAIRP requires that risks be reduced unless the cost of further reduction would be grossly disproportionate to the benefit. This is not cost-benefit analysis. The test is gross disproportion, not mere imbalance. The burden of proof lies with the duty holder to demonstrate that risks have been reduced so far as is reasonably practicable, not with the regulator to show they have not. Melchers (2001) warned that SFAIRP boundaries are inherently judgemental, requiring transparency in how decisions are made and documented.
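The asymmetry with cost-benefit analysis can be shown in a few lines. The disproportion factor and all figures below are hypothetical; in practice both the factor and the benefit estimate are matters of documented expert judgement, not calculation alone:

```python
# Sketch of the gross-disproportion test described by Jones-Lee & Aven
# (2011): a treatment is required unless its cost is grossly
# disproportionate to its risk-reduction benefit. The factor of 3.0 is
# an illustrative assumption, not a legislated value.
def sfairp_requires(cost: float, benefit: float,
                    disproportion_factor: float = 3.0) -> bool:
    """Treatment is required unless cost exceeds benefit by the gross-
    disproportion factor. Note the asymmetry with plain cost-benefit
    analysis: cost merely exceeding benefit is not enough to reject."""
    return cost <= benefit * disproportion_factor

# A measure costing double its quantified benefit can still be required:
required = sfairp_requires(cost=2.0e6, benefit=1.0e6)
# But at nine times the benefit, rejection can be argued:
rejected = sfairp_requires(cost=9.0e6, benefit=1.0e6)
```

Under plain cost-benefit logic the first measure would be rejected; under SFAIRP the duty holder must implement it or document why not.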
Systems thinking in treatment. Langdalen et al. (2020) argued that ALARP/SFAIRP must be applied at the systems level, not the component level. Hardening one area may displace risk to another. Brown & Cox's (2010) "whack-a-mole" problem applies directly: reducing vulnerability at one point may increase it elsewhere if the adversary adapts. Treatment design must consider the system, not individual risks in isolation.
Traceability. Every treatment should trace to the threat scenario and vulnerability it addresses. This serves three functions: it justifies the cost to the client, it enables review when the threat environment changes, and it provides the audit trail for assurance. A treatment without a traceable risk origin is an assertion, not a justified measure. Stewart & Mueller (2012) demonstrated that many security investments fail basic cost-effectiveness tests. Traceability is the mechanism for testing proportionality.
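A traceability record amounts to a small schema that makes untraceable recommendations detectable. The field names are assumptions for demonstration, not an HB 167 schema:

```python
# Illustrative traceability record: every treatment carries references to
# the scenario and vulnerability it addresses, its SFAIRP justification,
# and the residual risk remaining after implementation.
from dataclasses import dataclass

@dataclass
class Treatment:
    name: str
    addresses_scenario: str       # which threat scenario this treats
    addresses_vulnerability: str  # which vulnerability component it reduces
    sfairp_justification: str     # why this level of control is proportionate
    residual_risk: str            # risk zone remaining after implementation

def untraceable(treatments: list) -> list:
    """Flag treatments missing any part of the audit trail."""
    return [t.name for t in treatments
            if not all([t.addresses_scenario, t.addresses_vulnerability,
                        t.sfairp_justification, t.residual_risk])]

# Hypothetical examples:
access_control = Treatment(
    name="Access control system",
    addresses_scenario="Unauthorised entry to plant rooms after hours",
    addresses_vulnerability="accessibility",
    sfairp_justification="Cost proportionate to reducing a tolerable-zone risk",
    residual_risk="broadly acceptable",
)
bare_assertion = Treatment("CCTV", "", "", "", "")  # recommendation, no audit trail
flags = untraceable([access_control, bare_assertion])
```

Run against the assessment described in the opening anecdote, every recommendation would be flagged, which is precisely the project director's problem at the design gate.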
What you are probably getting
Based on our experience reviewing security risk assessments across transport, infrastructure, and urban development projects, a consistent pattern emerges.
| What is typically present | What is typically missing |
|---|---|
| Global or national threat overview | Site-specific threat scenarios with adversary profiling |
| Asset description and context | Vulnerability analysis decomposed by threat type |
| A risk matrix with coloured cells | Documented risk criteria established before assessment |
| List of security recommendations | Treatments traced to specific threat-vulnerability pairings |
| Reference to ISO 31000 or HB 167 | Actual application of the TVC methodology |
| Professional formatting and graphics | SFAIRP justification for treatment decisions |
| Cost estimate for recommendations | Residual risk documentation after treatment |
The standard references are cited. The TVC model is mentioned. The risk matrix appears. But the analytical work that should connect these elements is absent. Threat scenarios are generic. Vulnerability is a single score, not a decomposed analysis. Consequence is stated, not assessed. Treatments are listed, not justified. Residual risk is not documented.
Heyerdahl (2022) documented the same pattern in Norway's shift to risk-based protective security regulation after 2011: organisations required to conduct security risk assessments often lack the competence to do so, producing superficial compliance rather than genuine risk management. McIlhatton et al. (2018) identified the same gap in crowded places protection: practitioners need decision-ready risk information, but what they receive is either too generic to act on or too complex to interpret.
Kasperson et al. (2022) explained why this matters beyond the assessment itself: how risk is communicated determines how it is perceived and acted upon. A poorly structured assessment does not just fail to inform decisions. It actively misinforms them by creating confidence in a risk position that has not been analytically tested.
What to specify when commissioning
If you are procuring a security risk assessment for a built environment project, specify the process and the standard of work, not just the deliverable title. Based on ISO 31000, HB 167, and the evidence on what produces credible, decision-ready assessments:
- Methodology aligned with ISO 31000:2018 and HB 167:2006, with the TVC model applied (not just referenced)
- Risk criteria defined at the outset, before any risks are identified, calibrated to the project's risk appetite and decision context
- Threat scenarios specific to the site, informed by intelligence, crime data, and structured analysis, not generic national threat statements
- Vulnerability analysis decomposed by threat type, assessing attractiveness, accessibility, organic security, and population density against each credible scenario
- Consequence assessment across multiple dimensions (human, economic, administration, environmental) with defined severity scales
- Risk evaluation against the defined criteria, with risks allocated to the three-zone model (unacceptable, tolerable/SFAIRP, broadly acceptable)
- Treatment options with SFAIRP justification, traced to the risks they address, with residual risk documented after treatment
- Independence: the security risk assessment should be conducted independently of any product or system supplier whose commercial interest is in the treatment recommendations
- Integration with the design process: security risk findings presented at design decision points, not delivered as a standalone report after design is locked
None of this exceeds what the standards already require. The gap exists because many projects specify a "security risk assessment" without defining what that means, and the market has optimised for the minimum that satisfies the commissioning brief.
You can close that gap by specifying what you need.