The Complete Threat Modeling Guide

Threat Enumeration Techniques


About This Section

This is Part 2 of a four-part comprehensive threat modeling guide. This section focuses on the practical work of actually finding threats, covering ten distinct techniques for systematic threat discovery. If Part 1 was about understanding the map, this part is about knowing how to navigate the territory.


Table of Contents

  1. Element-by-Element Walk
  2. Trust Boundary Analysis
  3. Attacker Persona Technique
  4. Abuse Cases and Misuse Stories
  5. Pattern Libraries and Threat Catalogs
  6. The “What If” Game
  7. Collaborative Techniques
  8. Using Checklists and Frameworks
  9. Validation Techniques
  10. Reality Check: Accepting Imperfection

Techniques at a Glance

Before diving in, here’s a quick overview of when each technique shines:

| Technique | Best For | Coverage Type | Time Investment |
|---|---|---|---|
| Element-by-Element Walk | Comprehensive component coverage | Systematic | Medium-High |
| Trust Boundary Analysis | Boundary crossing attacks | Relationship-focused | Medium |
| Attacker Persona Technique | Human-driven attack scenarios | Motivation-driven | Low-Medium |
| Abuse Cases & Misuse Stories | Business logic vulnerabilities | Function-driven | Medium |
| Pattern Libraries & Catalogs | Known vulnerabilities | Historical | Low |
| The “What If” Game | Creative, unexpected attacks | Assumption-challenging | Low |
| Collaborative Techniques | Diverse perspectives | Team-based | Medium |
| Checklists & Frameworks | Common vulnerabilities | Standards-based | Low-Medium |
| Validation Techniques | Verifying completeness | Quality assurance | Low |

No single technique is sufficient. The best threat models use multiple techniques together.


Element-by-Element Walk

The most systematic approach to threat enumeration is walking through your data flow diagram element by element, applying your chosen framework (usually STRIDE) to each component. This sounds tedious because it is, but thoroughness matters more than elegance when you’re trying to protect systems.

The Basic Process

Start with your data flow diagram and create a list of every element: external entities, processes, data stores, and data flows. For each element, work through every applicable threat category. Document what you find, move to the next element, and repeat until you’ve covered everything.

The key word is “applicable.” Not every STRIDE category applies to every element type. Trying to analyze categories that don’t apply wastes time and creates confusion. Here’s the matrix:

| Element Type | S | T | R | I | D | E |
|---|---|---|---|---|---|---|
| External Entities | ✓ | - | - | - | - | - |
| Processes | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Data Stores | - | ✓ | ✓ | ✓ | ✓ | - |
| Data Flows | - | ✓ | - | ✓ | ✓ | - |

External entities are primarily vulnerable to spoofing (can someone impersonate this user or system?). Processes face all six STRIDE categories. Data stores face tampering, repudiation, information disclosure, and denial of service. Data flows face tampering, information disclosure, and denial of service.

The STRIDE Matrix Approach

Creating a matrix helps ensure nothing slips through. Down the left side, list every element from your DFD. Across the top, put the applicable STRIDE categories. Work through each cell, asking the relevant question and documenting threats you identify.

For each applicable category, ask the relevant question:

| Category | Key Question to Ask |
|---|---|
| Spoofing | Can someone impersonate this component or its users? What authentication exists? |
| Tampering | Can someone modify this component or the data it handles? What integrity controls exist? |
| Repudiation | Can someone deny performing an action? What logging exists? Are logs tamper-proof? |
| Information Disclosure | Can someone access information they shouldn’t? Is sensitive data encrypted? |
| Denial of Service | Can someone make this unavailable? What availability controls exist? |
| Elevation of Privilege | Can someone gain capabilities they shouldn’t have? What authorization exists? |

For a web application with user authentication, the matrix might start with the “User” external entity. For Spoofing, you ask whether someone could impersonate a legitimate user. Possible threats include credential stuffing attacks, session hijacking, and social engineering. You document each one with enough detail to act on later.

Then you move to the “Login API” process. For Spoofing, can someone impersonate this API? Maybe through DNS hijacking or a fake login page. For Tampering, can someone modify the API code or behavior? Possible if the deployment pipeline is compromised. For Repudiation, can users deny login attempts? Yes, if logging is insufficient. Work through each cell systematically.

Avoiding Matrix Blindness

The danger of the matrix approach is mechanical thinking. You can fill in boxes without actually thinking about what attackers would do. Some practitioners combat this by requiring at least one threat per cell, which forces deeper analysis. Others set a time minimum per element before moving on.

The best approach is combining the matrix with other techniques from this guide. Use the matrix to ensure coverage, but use attacker personas and “what if” thinking to generate the actual threats.
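To see how the matrix drives the walk, here is a minimal sketch that generates the cells to work through. The element names and types are hypothetical stand-ins for your own DFD, and the applicability rules mirror the matrix above:

```python
# Sketch: enumerate STRIDE matrix cells for a DFD.
STRIDE = ["Spoofing", "Tampering", "Repudiation",
          "Information Disclosure", "Denial of Service",
          "Elevation of Privilege"]

# Applicable categories per element type (matches the matrix above).
APPLICABLE = {
    "external_entity": {"Spoofing"},
    "process": set(STRIDE),
    "data_store": {"Tampering", "Repudiation",
                   "Information Disclosure", "Denial of Service"},
    "data_flow": {"Tampering", "Information Disclosure",
                  "Denial of Service"},
}

def matrix_cells(elements):
    """Yield (element, category) pairs to work through, in order."""
    for name, etype in elements:
        for category in STRIDE:
            if category in APPLICABLE[etype]:
                yield name, category

# Hypothetical elements from the login example in the text.
dfd = [("User", "external_entity"), ("Login API", "process"),
       ("User DB", "data_store"), ("Login request", "data_flow")]

for element, category in matrix_cells(dfd):
    print(f"{element}: {category} - threats?")
```

Generating the cells mechanically keeps coverage honest; the thinking still happens inside each cell.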


Trust Boundary Analysis

Trust boundaries are where the interesting attacks happen. When data crosses from a less trusted zone to a more trusted zone, the more trusted side makes assumptions about what it’s receiving. Attackers exploit those assumptions.

Identifying Trust Boundaries

Start by asking where trust levels change in your system. Network boundaries are obvious: internet to DMZ, DMZ to internal network, internal network to database tier. But trust boundaries also exist at process boundaries (web application to backend service), privilege boundaries (user to admin), account boundaries (one user’s data to another’s), and organizational boundaries (your systems to vendor systems).

| Boundary Type | Examples | Common Assumptions to Challenge |
|---|---|---|
| Network | Internet → DMZ, DMZ → Internal, Internal → Database | Traffic is encrypted; firewall rules are correct |
| Process | Web App → Backend, Service → Service | Input is validated; authentication is checked |
| Privilege | User → Admin, Application → OS | Authorization is enforced; permissions are minimal |
| Account | User A → User B, Customer → Employee | Access controls prevent cross-account access |
| Organizational | Your System → Vendor, Internal → Partner | Third parties are secure; contracts are enforced |

For each boundary, document what’s on each side, who or what is trusted, what assumptions the trusted side makes about input from the untrusted side, and what validation or controls exist at the boundary.

The Trust Discontinuity Principle

Here’s a useful mental model: every trust boundary represents a discontinuity in assumptions. On one side, certain things are assumed to be true. On the other side, those assumptions may not hold.

Consider the boundary between your web application and your database. The application assumes it’s talking to a legitimate database. The database assumes queries come from an authorized application. What happens if an attacker compromises the application server and sends malicious queries? What if they intercept traffic between application and database?

For each boundary, list the assumptions on each side. Then systematically consider how an attacker might violate each assumption. This generates threats that pure component analysis might miss.

Trust Boundary Template

For each boundary you identify, document:

| Field | Description |
|---|---|
| Boundary Name | Descriptive identifier (e.g., “API Gateway to Internal Services”) |
| Trusted Side | What’s on the more trusted side |
| Untrusted Side | What’s on the less trusted side |
| Assumptions | What the trusted side assumes about input |
| Current Controls | Security mechanisms at the boundary |
| Potential Attacks | How an attacker might violate assumptions |
| Threats Identified | Specific threats from this analysis |

Example: API Gateway Boundary

Consider an API gateway that sits between external clients and internal microservices.

The trusted side consists of the internal microservices. The untrusted side includes external clients (mobile apps, web browsers, partner systems).

The gateway assumes that incoming requests have valid authentication tokens, request payloads conform to expected schemas, clients won’t send malicious input, and rate limiting prevents abuse.

Current controls include OAuth 2.0 token validation, JSON schema validation, and rate limiting.

Potential attacks include tokens stolen through XSS or phishing, schema validation bypasses through encoding tricks, malicious payloads that pass validation but exploit downstream services, and rate limit bypasses through distributed requests.

This analysis generates threats specific to the trust boundary that element-by-element analysis might miss because it considers the relationship between components rather than components in isolation.
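Captured as data, the boundary template might look like the sketch below. The dataclass fields mirror the template, and the values come from the illustrative API gateway example; both are assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class TrustBoundary:
    """One filled-in copy of the trust boundary template."""
    name: str
    trusted_side: str
    untrusted_side: str
    assumptions: list = field(default_factory=list)
    current_controls: list = field(default_factory=list)
    potential_attacks: list = field(default_factory=list)
    threats_identified: list = field(default_factory=list)

# Populated from the (illustrative) API gateway example above.
gateway = TrustBoundary(
    name="API Gateway to Internal Services",
    trusted_side="Internal microservices",
    untrusted_side="External clients (mobile apps, browsers, partners)",
    assumptions=["Tokens are valid", "Payloads conform to schemas",
                 "Rate limiting prevents abuse"],
    current_controls=["OAuth 2.0 token validation",
                      "JSON schema validation", "Rate limiting"],
    potential_attacks=["Stolen tokens via XSS or phishing",
                       "Schema bypass through encoding tricks",
                       "Distributed rate limit bypass"],
)
```

Keeping boundaries in a structured form makes it easy to spot ones where `threats_identified` is still empty.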


Attacker Persona Technique

Different attackers have different capabilities, motivations, and constraints. A script kiddie running automated tools against your login page is a different threat than a nation-state actor with months of patience and custom tooling. The attacker persona technique forces you to consider threats from multiple perspectives.

Seven Standard Personas

You don’t need to invent personas from scratch. Seven personas cover most scenarios:

| Persona | Capabilities | Motivation | Patience | Key Question |
|---|---|---|---|---|
| Script Kiddie | Public tools only | Low-hanging fruit, bragging rights | Minutes | What would automated scanners find? |
| Opportunistic External | Moderate skills, multiple techniques | Any exploitable weakness | Days-weeks | What could a motivated attacker accomplish in a few weeks? |
| Targeted External | Advanced skills, custom exploits | Your specific data/organization | Months | What if someone really wants to breach your organization? |
| Malicious Insider | Legitimate access, internal knowledge | Revenge, profit, ideology | Months-years | What’s the most damaging thing an employee could do? |
| Compromised Insider | Stolen credentials/device | External attacker goals | Varies | What if an attacker gains access to an employee’s laptop? |
| Nation-State Actor | Unlimited resources, APT | Strategic intelligence, sabotage | Years | What would a government-level adversary target? |
| Accidental Threat | Well-meaning but mistake-prone | None (errors only) | N/A | What mistakes could cause significant damage? |

Using Personas in Practice

For each persona, walk through your system from their perspective. What would they target? What access do they have or could they obtain? What techniques would they use? Where would they start? What would success look like for them?

This generates threats that systematic analysis might miss because it considers the human element. The script kiddie probably won’t find your complex business logic flaw, but they’ll definitely try SQL injection. The malicious insider won’t bother with network attacks when they can just copy files they already have access to.

Persona Prioritization

Not all personas are equally relevant to your system. A small internal HR application probably doesn’t need to consider nation-state actors. A defense contractor probably does.

Consider your data and what it’s worth to various attackers, your industry and whether you’re a common target, your organization’s profile and visibility, and regulatory requirements that mandate considering certain threat actors. Focus deeper analysis on the two or three most relevant personas while still doing basic analysis for others.


Abuse Cases and Misuse Stories

Traditional use cases describe how legitimate users accomplish goals. Abuse cases (also called misuse cases or misuse stories) describe how malicious users accomplish malicious goals. If you’ve written user stories, you can write abuse stories. They follow the same format with different intent.

Writing Abuse Cases

Start with a legitimate use case. “As a customer, I want to reset my password so that I can regain access to my account.” Now invert it. “As an attacker, I want to reset another user’s password so that I can gain access to their account.”

Then detail how the abuse might work. What weaknesses in the password reset flow could an attacker exploit? Maybe the reset link is predictable. Maybe the security questions are guessable from social media. Maybe the email check doesn’t have rate limiting. Each potential exploit path becomes a threat to document and address.
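One of the weaknesses above, predictable reset links, is addressed by server-generated random tokens. A minimal sketch, assuming a hypothetical in-memory store and a 15-minute validity policy; a real system would persist the hashes and add rate limiting:

```python
import hashlib
import secrets
import time

RESET_TTL = 15 * 60  # seconds a reset token stays valid (assumed policy)
_pending = {}        # hypothetical store: token_hash -> (user_id, expiry)

def issue_reset_token(user_id):
    """Return an unpredictable token; store only its hash."""
    token = secrets.token_urlsafe(32)  # ~256 bits of CSPRNG randomness
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    _pending[token_hash] = (user_id, time.time() + RESET_TTL)
    return token  # embed this in the emailed reset link

def redeem_reset_token(token):
    """Return the user_id if the token is valid and unexpired, else None."""
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    entry = _pending.pop(token_hash, None)  # single use: always removed
    if entry is None:
        return None
    user_id, expiry = entry
    return user_id if time.time() < expiry else None
```

Hashing the stored copy means a leaked token table can’t be replayed, and popping on redemption makes each link single-use.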

Misuse Story Format

A useful format mirrors agile user stories:

| Component | Format |
|---|---|
| Story | “As a [malicious role], I want to [malicious action] so that I can [malicious goal]” |
| Preconditions | What the attacker needs before starting |
| Attack Steps | How they accomplish the goal |
| Postconditions | What changes after successful attack |

Example: “As an attacker, I want to enumerate valid usernames so that I can target specific accounts for credential stuffing.”

Preconditions: Access to login endpoint, list of potential usernames from data breach.

Attack steps: Attempt login with each username and random password. Observe response differences (“invalid username” vs “invalid password”). Collect confirmed valid usernames.

Postconditions: Attacker has list of valid usernames to target with credential stuffing or social engineering.
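The enumeration story above works only if responses differ between unknown users and wrong passwords. A sketch of a login check that returns one generic message either way; the user store and salting scheme here are placeholders to illustrate the idea, not a production password design:

```python
import hashlib
import hmac

# Hypothetical user store: username -> salted password hash.
_users = {"alice": hashlib.sha256(b"salt" + b"correct-horse").hexdigest()}
_GENERIC_ERROR = "Invalid username or password."  # same text either way

def login(username, password):
    # Compare against a dummy hash for unknown users so the code path
    # (and rough timing) is similar whether or not the user exists.
    stored = _users.get(username, "0" * 64)
    supplied = hashlib.sha256(b"salt" + password.encode()).hexdigest()
    if username in _users and hmac.compare_digest(stored, supplied):
        return "OK"
    return _GENERIC_ERROR  # never reveals which field was wrong
```

The abuse story becomes a test case: assert that a bad username and a bad password produce byte-identical responses.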

Systematic Abuse Case Generation

For each legitimate function in your system, ask how it could be abused. Take the function’s inputs and consider what happens with malicious input, excessive input, missing input, or input from unauthorized users. Consider timing attacks, resource exhaustion, and information leakage.

| Function | Common Abuse Cases |
|---|---|
| User Registration | Mass account creation (bot armies), registration with others’ emails, bypassing rate limits, data harvesting |
| File Upload | Malicious files (malware, scripts), oversized files (storage exhaustion), path traversal filenames, overwriting existing content |
| Search | SQL injection, information disclosure via detailed results, DoS via expensive queries, data enumeration |
| Password Reset | Token prediction, account enumeration, email flooding, token theft via referrer headers |
| Messaging | Spam, phishing, stored XSS, notification flooding |

Integration with Development

Abuse cases integrate naturally with development workflows. When a developer writes a user story, a security team member (or the developer themselves) writes corresponding abuse stories. The abuse stories inform security requirements and test cases.

This shifts security thinking earlier in the process and makes it concrete. “Prevent injection attacks” is abstract. “As an attacker, I want to inject SQL into the search field to dump the user database” is specific and testable.


Pattern Libraries and Threat Catalogs

You don’t need to invent every threat from scratch. Smart people have documented common threats across many systems and industries. Pattern libraries and threat catalogs give you a head start.

Using Threat Catalogs

A threat catalog is a collection of known threats organized for reuse. When you’re threat modeling a web application, you consult the web application threat catalog and see which threats apply. This is faster than brainstorming from scratch and ensures you don’t miss well-known issues.

The process: Review the catalog for threats relevant to your system type and technology. For each potentially relevant threat, assess whether it applies to your specific system. If it applies, add it to your threat model with specifics about how it manifests in your context.

Building Your Own Catalog

While public catalogs exist (we’ll cover them in the frameworks section), the most valuable catalogs are organization-specific. Every time you threat model a system, you generate threats. Catalog the reusable ones.

Structure your catalog by:

| Category | What to Include |
|---|---|
| Technology | Threats specific to your common tech stack (e.g., Node.js, PostgreSQL, AWS) |
| Pattern | Threats that apply to common architectural patterns (e.g., microservices, message queues) |
| Data Type | Threats specific to handling certain data (e.g., PHI, PCI data, credentials) |
| Integration | Threats from common third-party integrations (e.g., OAuth providers, payment processors) |

After a few threat modeling exercises, you’ll have a catalog that captures your organization’s specific context. New threat models go faster because you’re not starting from scratch.

Example Catalog Entry

| Field | Content |
|---|---|
| Title | Session Fixation |
| Category | Authentication |
| Applies To | Any system using session-based authentication |
| Description | Attacker sets a victim’s session ID before they authenticate, then uses that known session ID to access the victim’s session after authentication |
| Attack Vector | Attacker obtains valid session ID → tricks victim into authenticating with that session ID → attacker uses the same session ID |
| Indicators | Session ID in URL parameters, session ID doesn’t change after authentication, session ID can be set by client |
| Mitigations | Regenerate session ID on authentication, never accept session ID from URL, set HttpOnly and Secure flags |
| References | OWASP Session Management Cheat Sheet |

The “What If” Game

Sometimes systematic approaches miss creative attacks. The “What If” game deliberately challenges assumptions and explores scenarios your frameworks might not cover.

How to Play

Gather your threat modeling team (or play solo if necessary). Look at your system and start asking “What if?” questions. What if that assumption is wrong? What if that never-happens scenario happens? What if an attacker has capabilities you didn’t expect?

The key is suspending disbelief. In normal analysis, you might dismiss a scenario as unlikely. In the “What If” game, you explore it anyway to see where it leads.

Categories of “What If?”

| Category | Example Questions |
|---|---|
| Assumption Challenges | What if the database isn’t actually encrypted? What if the firewall rules have errors? What if the third-party service is compromised? |
| Capability Challenges | What if the attacker has insider knowledge? What if they’ve compromised an administrator? What if they can intercept encrypted traffic? |
| Timing Challenges | What if the attack happens at 3 AM on Christmas? What if it coincides with your major deploy? What if two unlikely things happen simultaneously? |
| Scale Challenges | What if there are a million requests instead of a hundred? What if the attack persists for months? What if every user is targeted? |
| Combined Challenges | What if an insider is also technically sophisticated? What if a configuration error coincides with an external attack? |

Documenting “What If?” Findings

Some “What If?” scenarios won’t lead anywhere useful. Others will reveal real threats. For the useful ones, document the scenario you explored, the threat it revealed, how likely the scenario actually is, what controls would help, and whether this changes any risk ratings.

Don’t discard unlikely scenarios entirely. Document them with low likelihood scores. The next major breach might come from a scenario everyone dismissed as “that would never happen.”

Example Session

System: Healthcare patient portal

“What if the SSL certificate expires and nobody notices?” Users get security warnings. Some users click through anyway. An attacker mounts a man-in-the-middle attack and intercepts credentials and PHI. Likelihood seems low because we have monitoring, but… do we alert on expiration? Who responds at 2 AM? Let’s add certificate expiration monitoring to the threat model and verify incident response.
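The certificate monitoring that session calls for can start small. A sketch using only the standard library; the 21-day warning threshold is an assumed policy, and `check_host` needs network access to the target:

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after):
    """Days remaining, given a certificate's notAfter string in the
    OpenSSL text form (e.g. 'Jun  1 12:00:00 2030 GMT')."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

def check_host(host, warn_days=21, port=443):
    """Fetch the live certificate and flag approaching expiry.
    warn_days is an assumed alerting threshold, not a standard."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    remaining = days_until_expiry(cert["notAfter"])
    return remaining, remaining <= warn_days
```

Run it from a scheduled job against every public endpoint and page someone when the second return value is true.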

“What if a developer commits credentials to the public repo?” GitHub has scanners, but what about private tokens? Internal APIs? Let’s check if we have secret scanning. What’s the blast radius if various credentials leak? Add “credential exposure” threats with specific credentials identified.

“What if the backup system is compromised?” Attacker could restore an old database to a new location, accessing historical data. Or corrupt backups so recovery fails. Or wait until backup runs, compromise it, then destroy production data. This generates several threats around backup integrity and security.


Collaborative Techniques

Threat modeling is better with diverse perspectives. Security experts see security threats. Developers see implementation vulnerabilities. Operations sees deployment and configuration risks. Business stakeholders see data value and impact. Collaborative techniques harness this diversity.

Technique Summary

| Technique | Process | Best For |
|---|---|---|
| Evil Brainstorming | Team adopts attacker mindset for 15-30 minutes; no idea too crazy | Generating creative threats |
| Structured Walkthroughs | Walk through system from different perspectives (data flow, user session, admin functions) | Deep understanding of specific flows |
| Attack/Defend Roleplay | Split team into attackers and defenders; compare plans | Finding defensive blind spots |
| Silent Brainstorming | Everyone writes threats individually before discussion | Including quieter team members, avoiding groupthink |

Evil Brainstorming

Also called “black hat brainstorming,” this technique has team members temporarily adopt an attacker mindset. The rules are simple: for a fixed time period (15-30 minutes), everyone tries to think of ways to attack the system. No idea is too crazy. Quantity matters more than quality initially.

Facilitation tips: Remind people to think like attackers, not defenders. Defer judgment, meaning no shooting down ideas during brainstorming. Build on each other’s ideas. Cover different attacker types (external, internal, accidental). Have someone capture everything, as filtering happens later.

After brainstorming, review the ideas and identify which represent real threats. Many won’t survive scrutiny, but some will reveal issues nobody would have found alone.

Structured Walkthroughs

In a structured walkthrough, the team walks through the system from different perspectives. One walkthrough might follow data as it moves through the system, asking what could go wrong at each step. Another might follow a user session from login to logout. Another might follow administrative functions.

Having different team members lead different walkthroughs works well. The developer who built the authentication system leads the authentication walkthrough. The ops engineer leads the infrastructure walkthrough. Each brings expertise to their area while others ask questions from outside perspectives.

Attack/Defend Roleplay

Split the team into attackers and defenders. Attackers spend time (20-30 minutes) developing attack plans against the system. Defenders spend the same time predicting what attackers will try and planning defenses. Then compare notes.

This reveals mismatches between what security teams defend against and what attackers would actually do. It also surfaces assumptions defenders have made that attackers don’t share.

Silent Brainstorming

Not everyone speaks up in group sessions. Silent brainstorming (also called brainwriting) has everyone write threats individually before group discussion. Give everyone sticky notes and markers. Set a timer for 10-15 minutes. Everyone writes threats, one per note. Then collect and cluster similar threats before discussing.

This ensures quieter team members contribute and prevents groupthink where everyone follows the first few ideas.


Using Checklists and Frameworks

Checklists and frameworks provide external expertise. Someone has already cataloged common threats against web applications, cloud deployments, mobile apps, and APIs. Use their work.

OWASP Top 10

The Open Web Application Security Project maintains a list of the ten most critical web application security risks, updated periodically. As of the 2021 version:

| # | Category | Threat Modeling Question |
|---|---|---|
| 1 | Broken Access Control | Can users access data or functions they shouldn’t? |
| 2 | Cryptographic Failures | Is sensitive data properly encrypted at rest and in transit? |
| 3 | Injection | Can untrusted data be interpreted as code (SQL, commands, etc.)? |
| 4 | Insecure Design | Are security controls missing from the architecture itself? |
| 5 | Security Misconfiguration | Are defaults secure? Is hardening complete? |
| 6 | Vulnerable Components | Are dependencies current and free of known vulnerabilities? |
| 7 | Authentication Failures | Can authentication be bypassed or abused? |
| 8 | Software/Data Integrity Failures | Can updates, dependencies, or data be tampered with? |
| 9 | Logging/Monitoring Failures | Would attacks be detected? Can incidents be investigated? |
| 10 | Server-Side Request Forgery | Can the server be tricked into making unintended requests? |

Using it in threat modeling: For web applications, walk through each OWASP Top 10 category and ask whether your system is vulnerable. “Do we have broken access control anywhere?” generates specific threats to document.

Don’t treat it as exhaustive. The Top 10 represents common risks, not all risks. Use it as a starting point, not an ending point.

CWE Top 25

The Common Weakness Enumeration (CWE), maintained by MITRE, publishes a list of the 25 most dangerous software weaknesses based on prevalence and severity data. These are more granular than the OWASP Top 10, focusing on specific vulnerability types like out-of-bounds write, improper input validation, SQL injection, and so forth.

Using it in threat modeling: Review each CWE against your codebase. Are you doing anything that could introduce this weakness? CWE entries include detailed descriptions, examples, and mitigations.

MITRE ATT&CK

ATT&CK (Adversarial Tactics, Techniques, and Common Knowledge) is a knowledge base of adversary behavior. Unlike vulnerability-focused frameworks, ATT&CK documents what attackers actually do: their tactics (why), techniques (how), and procedures (specific implementations).

Using it in threat modeling: ATT&CK helps you think about attack chains rather than individual vulnerabilities. Look at techniques in relevant ATT&CK matrices (Enterprise, Mobile, ICS) and consider which could be used against your system. This is particularly valuable for detection planning: if attackers use technique X, how would you detect it?

NIST Frameworks

NIST Cybersecurity Framework and NIST 800-53 provide control catalogs. While primarily for compliance, they’re useful threat modeling inputs. If a control category exists, threats exist that require it. Review NIST categories to ensure your threat model covers the corresponding threat types.

Industry-Specific Frameworks

| Industry | Frameworks | Focus Areas |
|---|---|---|
| Healthcare | HITRUST, HIPAA Security Rule | PHI protection, patient privacy |
| Finance | PCI-DSS, SOX | Cardholder data, financial controls |
| Government | FedRAMP, NIST 800-53 | Federal systems, classified data |
| General | ISO 27001, SOC 2 | Information security management |

Your industry likely has frameworks that highlight domain-specific threats and required controls. Incorporate them into your threat modeling.


Validation Techniques

Finding threats is only valuable if you find the right threats. Validation ensures your threat model is comprehensive and accurate.

Validation Approaches

| Technique | Purpose | Who’s Involved |
|---|---|---|
| Coverage Analysis | Ensure all elements have associated threats | Threat modeling team |
| Attack Path Analysis | Verify complete attack chains exist | Security architect |
| Red Team Review | Get offensive perspective on blind spots | Red team or pentesters |
| Historical Comparison | Check if known breach patterns are covered | Security researcher |
| Stakeholder Review | Catch domain-specific errors | Dev, ops, business, legal |

Coverage Analysis

After generating threats, map them back to your data flow diagram. Every element should have associated threats. Every trust boundary should have threats specific to boundary crossing. Every data store containing sensitive data should have information disclosure threats.

Gaps in coverage indicate areas where you need to do more work. An element with no threats might be genuinely secure (rare), might have threats you missed (common), or might be so boring that threats are low-priority (possible). Investigate to determine which.
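Mapping threats back to elements is mechanical enough to script. A sketch with hypothetical element and threat lists standing in for your real model:

```python
# Sketch: find DFD elements with no documented threats.
elements = {"User", "Login API", "User DB", "Audit Log"}

# Each threat record names the element it applies to (fields assumed).
threats = [
    {"id": "T1", "element": "User", "title": "Credential stuffing"},
    {"id": "T2", "element": "Login API", "title": "Fake login page"},
    {"id": "T3", "element": "User DB", "title": "SQL injection dump"},
]

covered = {t["element"] for t in threats}
gaps = elements - covered  # elements nobody has analyzed yet
print("Elements needing more analysis:", sorted(gaps))
```

Here the audit log would surface as a gap, prompting exactly the which-of-the-three-cases investigation described above.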

Attack Path Analysis

Can you trace complete attack paths from attacker entry to impact? A threat like “SQL injection in search field” is a step, not a complete path. The path might be: attacker finds SQL injection, extracts database credentials, uses credentials to access database directly, exfiltrates customer data.

If your threat model has individual threats but not connected paths, you might miss the bigger picture. Attack path analysis ensures you understand how individual weaknesses combine.
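Connecting steps into paths can be treated as traversal over “this step enables that step” links. The chain below is the SQL injection example from the text; the graph structure itself is an illustrative assumption:

```python
# Sketch: individual threats linked into attack paths.
enables = {
    "SQL injection in search": ["Extract DB credentials"],
    "Extract DB credentials": ["Direct DB access"],
    "Direct DB access": ["Exfiltrate customer data"],
    "Exfiltrate customer data": [],  # terminal impact
}

def attack_paths(entry, graph, path=None):
    """Yield every complete path from an entry point to a terminal impact."""
    path = (path or []) + [entry]
    next_steps = graph.get(entry, [])
    if not next_steps:
        yield path  # reached an impact with no further steps
    for step in next_steps:
        yield from attack_paths(step, graph, path)

for p in attack_paths("SQL injection in search", enables):
    print(" -> ".join(p))
```

A threat that appears in no complete path is either missing its neighbors or genuinely isolated; either finding is useful.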

Red Team Review

If you have a red team or penetration testing capability, have them review your threat model. Ask them what you missed. Ask what they would target. Their practical experience attacking systems reveals blind spots in theoretical analysis.

Even without a formal red team, have someone with offensive security experience review the threat model. The “attacker eye test” catches things systematic analysis misses.

Historical Comparison

Review breaches and incidents at similar organizations. Were those threats in your model? If a hospital in another state was breached via a specific technique, is that technique in your healthcare system’s threat model?

Historical incidents are free threat intelligence. Someone else already learned the hard way. Use their lessons.

Stakeholder Review

Have different stakeholders review the threat model for their areas. Development reviews technical accuracy. Operations reviews infrastructure coverage. Business reviews impact assessments. Legal reviews compliance implications.

Each stakeholder catches errors and gaps the others miss. Collective review produces better results than any individual reviewer.


Reality Check: Accepting Imperfection

Here’s an uncomfortable truth: no threat model is complete. You will miss things. Attackers will find vulnerabilities you didn’t consider. Systems will fail in ways you didn’t predict. This isn’t failure; it’s reality.

Why Perfect is Impossible

Attackers have advantages you can’t overcome:

| Attacker Advantage | Your Challenge |
|---|---|
| Only need one success | Must defend everything |
| Can take unlimited time | Have deadlines |
| Use techniques that don’t exist yet | Can only model known threats |
| Understand your system through reconnaissance | Rely on documentation that may be wrong |

Your threat model is a point-in-time snapshot of your current understanding. Both your system and the threat landscape change continuously. The model is outdated the moment you finish it.

What to Do About It

Accept imperfection but don’t accept complacency. A comprehensive threat model that misses some things is infinitely better than no threat model at all. The goal isn’t perfection; it’s systematic risk reduction.

Build in reviews and updates. Treat the threat model as a living document. When you learn about new threats or discover your model was wrong, update it.

Create defense in depth. Since you’ll miss things, assume some attacks will succeed and plan for containment and recovery. Detection is as important as prevention.

Learn from incidents. Every security event is an opportunity to improve your threat model. Was this threat documented? If yes, why did it succeed? If no, why was it missed?

The Diminishing Returns Curve

Threat modeling has diminishing returns. The first hour of analysis finds critical threats. The tenth hour finds moderate threats. The hundredth hour finds increasingly theoretical concerns.

Know when to stop. A threat model comprehensive enough to drive meaningful security improvements is valuable. A threat model so exhaustive that it takes months and produces thousands of low-value findings is not. Find the balance for your organization and risk profile.


Summary and Next Steps

This concludes Part 2: Threat Enumeration Techniques. You now have ten different approaches for finding threats: element-by-element analysis with STRIDE, trust boundary analysis, attacker personas, abuse cases, pattern libraries, the “What If” game, collaborative techniques, established checklists and frameworks, validation approaches, and realistic expectations about completeness.

No single technique is sufficient. The best threat models use multiple techniques together. STRIDE provides structure. Personas add human context. Abuse cases catch business logic issues. Checklists ensure you don’t miss common problems. “What If” finds the creative attacks. Validation confirms you haven’t fooled yourself.

Continue to Part 3 for a complete worked example applying these techniques to a real healthcare system. Seeing everything in practice clarifies what can seem abstract in isolation.


Part 2 Complete | Continue to Part 3 →