The Complete Threat Modeling Guide

Introduction & Methodologies


About This Guide

This is Part 1 of a 4-part comprehensive threat modeling guide. This section covers the foundational concepts and all major methodologies you need to begin threat modeling professionally.

Complete Guide Structure:

  • Part 1 (This Document): Introduction & Methodologies
  • Part 2: Threat Enumeration Techniques
  • Part 3: Healthcare Example - Complete PASTA Walkthrough
  • Part 4: Best Practices, Templates & Resources

Table of Contents - Part 1

  1. Introduction to Threat Modeling
  2. Core Methodologies
  3. The Threat Modeling Process

Introduction to Threat Modeling

What is Threat Modeling?

Threat modeling is the structured process of identifying potential security threats to a system, understanding what makes those threats possible, and determining how to mitigate them. At its core, threat modeling answers three fundamental questions:

  1. What are we building? (System understanding)
  2. What can go wrong? (Threat identification)
  3. What are we going to do about it? (Mitigation planning)

The practice sits at the intersection of security, architecture, and risk management. You’re essentially trying to think like an attacker while designing like a defender.

Why Threat Modeling Matters

Threat modeling shifts security left in the development lifecycle. Finding vulnerabilities during design costs dramatically less than discovering them in production, and the economics are stark. A fix during the design phase might cost $100 (change a diagram, update requirements). During development, that same fix costs $1,000 (code changes, testing). By the testing phase, you’re looking at $10,000 (regression testing, delays). And in production? $100,000 or more (emergency patches, downtime, potential breach). A tenfold cost increase at each phase isn’t hyperbole; it’s what organizations actually experience.

Beyond cost efficiency, threat modeling delivers several key benefits. It forces risk prioritization, focusing security resources where they matter most. It supports compliance with regulatory requirements like HIPAA, PCI-DSS, SOC 2, and GDPR. It enables security by design, building protection into architecture from day one rather than bolting it on later. It creates shared understanding between security, development, and business teams. And it demonstrates due diligence to stakeholders that security is taken seriously.

The real-world impact of skipping threat modeling shows up in breach reports with depressing regularity. The Capital One breach in 2019 cost $80M in fines and affected over 100 million customers, stemming from AWS infrastructure misconfigurations that proper threat modeling would have caught. The Equifax breach in 2017 resulted in a $700M settlement affecting 147 million people, caused by a known, unpatched vulnerability in the Apache Struts web framework that no threat model had flagged. The Target breach in 2013 led to an $18.5M settlement and 41 million stolen cards, all because third-party HVAC vendor integration risks weren’t analyzed.

When to Threat Model

The most critical time to threat model is during new system design, before writing any code. Architecture decisions are easy to change on paper, and this is when you can set security requirements early and build threat awareness into the team.

Major features warrant threat modeling because new functionality means new attack surface. Integration points are particularly high-risk. Adding payment processing, file uploads, or third-party APIs all deserve focused security analysis.

Architecture changes require revisiting your threat model because trust boundaries shift when you modify system boundaries or data flows. New technologies introduce new risks. Cloud migration or infrastructure changes fundamentally alter your security assumptions.

Third-party integrations should trigger threat modeling before you connect to external systems. Supply chain risks are real. Data sharing has implications you need to think through. Your trust assumptions about the third party may be wrong.

Periodic reviews (quarterly or annually for existing systems) catch the gradual drift that happens as threats evolve with new attack techniques, systems change incrementally, and assumptions need validation.

Post-incident analysis should include threat modeling to learn from failures, validate gaps in your existing threat model, and improve your methodology for next time.

That said, don’t threat model when you’re answering simple, well-understood questions, for trivial changes with no security impact, as a pure compliance checkbox (it must be meaningful), or without stakeholder commitment to act on findings.


Core Methodologies

Multiple threat modeling frameworks exist, each with strengths and appropriate use cases. Expert threat modelers often combine methodologies for comprehensive coverage. Before diving into details, here’s a quick comparison to help you orient:

Methodology | Primary Focus | Best For | Time Investment
STRIDE | Component-level threats | Development teams, systematic analysis | Low-Medium
PASTA | Business risk alignment | Enterprise apps, compliance | High
DREAD | Risk scoring | Prioritization after threat identification | Low
OCTAVE | Organizational risk | Executive communication, strategy | High
LINDDUN | Privacy threats | GDPR/CCPA compliance, personal data | Medium

STRIDE Methodology

STRIDE was developed by Microsoft in 1999 and remains the most widely used threat modeling framework. It categorizes threats into six types, providing a systematic way to examine each component of a system.

Threat Category | Definition | Security Property Violated | Example Scenario
Spoofing | Impersonating something or someone else | Authentication | Attacker uses stolen credentials to access system as legitimate user
Tampering | Modifying data or code | Integrity | Man-in-the-middle attack modifies data in transit; attacker alters database records
Repudiation | Claiming to not have performed an action | Non-repudiation | User denies making a transaction; no audit trail exists to prove otherwise
Information Disclosure | Exposing information to unauthorized parties | Confidentiality | Database backup left publicly accessible; sensitive data in error messages
Denial of Service | Denying or degrading service to users | Availability | DDoS attack overwhelming servers; resource exhaustion
Elevation of Privilege | Gaining unauthorized capabilities | Authorization | User exploits vulnerability to gain admin access; horizontal privilege escalation

How to Use STRIDE

The process starts with creating a Data Flow Diagram showing external entities (users, systems), processes (code, services), data stores (databases, files), data flows (connections between elements), and trust boundaries (where trust level changes).

Then you apply STRIDE to each element. The key insight is that not all STRIDE categories apply to all element types. External entities are primarily vulnerable to spoofing (can someone impersonate this user or system?). Processes face all six STRIDE categories. Data stores face tampering, repudiation, information disclosure, and denial of service. Data flows face tampering, information disclosure, and denial of service. Trying to analyze categories that don’t apply wastes time and creates confusion.

Element Type | S | T | R | I | D | E
External Entities | ✓ | – | – | – | – | –
Processes | ✓ | ✓ | ✓ | ✓ | ✓ | ✓
Data Stores | – | ✓ | ✓ | ✓ | ✓ | –
Data Flows | – | ✓ | – | ✓ | ✓ | –

(✓ = category applies to that element type)
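This applicability mapping is easy to encode so reviewers only analyze the categories that matter for each element. A minimal sketch in Python (the names and structure are illustrative, not from any threat modeling tool):

```python
# Which STRIDE categories apply to each DFD element type, following the
# mapping above. All identifiers here are invented for illustration.
STRIDE_APPLICABILITY = {
    "external_entity": {"S"},
    "process": {"S", "T", "R", "I", "D", "E"},
    "data_store": {"T", "R", "I", "D"},
    "data_flow": {"T", "I", "D"},
}

def applicable_threats(element_type: str) -> set[str]:
    """Return the STRIDE categories worth analyzing for an element type."""
    return STRIDE_APPLICABILITY[element_type]

print(sorted(applicable_threats("data_flow")))  # ['D', 'I', 'T']
```

Looking up the element type before brainstorming keeps the analysis from wasting time on categories that don’t apply.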

STRIDE Example: Login Process

Let’s apply STRIDE to a web application login process to see how this works in practice:

For Spoofing, you’d identify threats like an attacker using stolen credentials to impersonate a user, or session token theft allowing impersonation.

For Tampering, you’d consider MITM attacks modifying login request/response, or SQL injection tampering with authentication logic.

For Repudiation, you’d think about users denying login attempts if logging is insufficient, or attackers clearing logs to hide compromise.

For Information Disclosure, you’d examine whether credentials could be exposed in logs or error messages, or whether username enumeration is possible via different error messages.

For Denial of Service, you’d analyze account lockout attacks preventing legitimate access, or resource exhaustion from excessive login attempts.

For Elevation of Privilege, you’d look at SQL injection bypassing authentication, or password reset flows allowing account takeover.

STRIDE’s strengths include being comprehensive and systematic, easy to teach and learn, an industry standard with extensive documentation, working well with data flow diagrams, and being good for component-level analysis. Its weaknesses are that it can become mechanical without proper context, doesn’t inherently prioritize threats (you need separate risk scoring), may miss business logic vulnerabilities, and requires good understanding of system architecture.

STRIDE works best for component-level security analysis, systematic threat coverage, teams new to threat modeling, and technical systems with clear boundaries.


PASTA (Process for Attack Simulation and Threat Analysis)

PASTA is a seven-stage, risk-centric methodology that explicitly connects security to business objectives. Unlike STRIDE’s component focus, PASTA takes a holistic view from business context through risk analysis.

Stage | Purpose | Key Outputs
1. Define Business Objectives | Understand what you’re protecting and why | Risk appetite, security objectives, compliance requirements
2. Define Technical Scope | Map the attack surface comprehensively | Component inventory, data classification, dependencies
3. Application Decomposition | Create detailed data flow diagrams | DFDs, trust boundaries, asset inventory
4. Threat Analysis | Identify threats at multiple levels | Threat scenarios by industry, application, infrastructure
5. Vulnerability & Weakness Analysis | Map concrete weaknesses to identified threats | CVEs, misconfigurations, code vulnerabilities
6. Attack Modeling | Model realistic attack scenarios | Attack trees, kill chains, chokepoints
7. Risk & Impact Analysis | Score, prioritize, and create remediation plan | Risk scores, business impact, mitigation roadmap

Stage 1: Define Business Objectives establishes what you’re protecting and why. You identify business drivers and goals, define security objectives aligned with business, understand regulatory requirements, determine risk appetite, and identify key stakeholders. The questions you’re answering include what business value the system provides, what the consequences of system failure are, what regulations apply, what risks the business will accept versus mitigate, and who makes security decisions.

Stage 2: Define Technical Scope maps the attack surface comprehensively. You document architecture and infrastructure, identify all system components, map network boundaries, list third-party dependencies, and classify data sensitivity.

Stage 3: Application Decomposition creates detailed data flow diagrams and identifies trust boundaries. You draw data flow diagrams, mark trust boundaries, document data flows, identify assets, and note security controls.

Stage 4: Threat Analysis identifies threats at multiple levels. You review industry threat intelligence, apply threat frameworks like STRIDE and MITRE ATT&CK, analyze threats at each trust boundary, consider different attacker personas, and document threat scenarios.

Stage 5: Vulnerability & Weakness Analysis maps concrete weaknesses to identified threats. You conduct code review for vulnerabilities, configuration analysis, dependency scanning, penetration testing if available, and map CVEs and CWEs.

Stage 6: Attack Modeling models realistic attack scenarios using attack trees and kill chains. You build attack trees for high-value goals, map attacks to kill chain stages, calculate attack path probabilities, and identify critical chokepoints.

Stage 7: Risk & Impact Analysis scores, prioritizes, and creates an actionable remediation plan. You calculate risk scores (likelihood times impact), quantify business impact, prioritize threats, define mitigations with owners and timelines, and create a risk treatment plan.

PASTA’s strengths include being business-focused with clear ROI, providing comprehensive coverage across all stages, having risk-based prioritization built in, being excellent for stakeholder communication, and being suitable for compliance requirements. Its weaknesses are that it’s time-intensive (the full process takes weeks), requires broad organizational input, has a steeper learning curve than STRIDE, and can be overkill for simple systems.

PASTA works best for enterprise applications, compliance-driven environments like healthcare and finance, systems with significant business impact, situations where stakeholder buy-in is critical, and risk-based decision making.


DREAD Risk Rating System

DREAD is a risk assessment model developed by Microsoft. While Microsoft deprecated it for internal use in 2008, many organizations still find it valuable for risk scoring. It provides a numeric framework for rating threat severity.

Each factor is rated 1-10, then averaged for a risk score:

Factor | Question | Low (1-3) | Medium (4-7) | High (8-10)
Damage | How bad would an attack be? | Minimal impact, easily recovered | Significant impact, notable recovery effort | Catastrophic, business-ending
Reproducibility | How easy is it to reproduce? | Difficult, requires perfect timing | Possible with some effort | Simple, works every time
Exploitability | How much work to launch attack? | Expert skills, custom tools | Moderate skills, available tools | Script kiddie, automated
Affected Users | How many people impacted? | <1% of users | 1-50% of users | 50-100% of users
Discoverability | How easy to discover threat? | Obscure, unlikely to find | Could be discovered with analysis | Obvious, easily found

The calculation is simple: Risk Score = (D + R + E + A + D) / 5. Scores from 8.0-10.0 are critical, 6.0-7.9 are high, 4.0-5.9 are medium, and 1.0-3.9 are low.

Example DREAD Scoring

Consider SQL Injection in a search function. Damage would be 10 because it provides full database access, exposing all patient records in a potential business-ending breach. Reproducibility is 10 because it works every time and is simple to reproduce. Exploitability is 7 because it requires intermediate skill but is a well-known attack technique with many tools available. Affected Users is 10 because all 5 million patient records and 100% of users would be affected. Discoverability is 8 because it’s easy to find with automated scanners and is a common vulnerability to test. The DREAD Score equals (10 + 10 + 7 + 10 + 8) / 5 = 9.0, which is critical.
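The calculation above is simple enough to automate for consistent scoring across a threat register. A small sketch (function names are illustrative):

```python
def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    """Average the five DREAD factors, each rated 1-10."""
    return (damage + reproducibility + exploitability
            + affected_users + discoverability) / 5

def dread_severity(score):
    """Map a DREAD score to the severity bands used in the text."""
    if score >= 8.0:
        return "Critical"
    if score >= 6.0:
        return "High"
    if score >= 4.0:
        return "Medium"
    return "Low"

# The SQL injection example from the text: D=10, R=10, E=7, A=10, D=8
score = dread_score(10, 10, 7, 10, 8)
print(score, dread_severity(score))  # 9.0 Critical
```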

DREAD’s strengths include simple numeric scoring, ease of understanding and explanation, consistent rating framework, being good for comparing threats, and being stakeholder-friendly. Its weaknesses are subjective ratings (different raters may score differently), potential to oversimplify complex risks, no explicit consideration of likelihood, Microsoft having deprecated it (though many still use it), and factors that may not weight equally in reality.

DREAD works best for prioritizing threats after identification, comparing relative risk levels, quick risk assessment, and situations where numeric scores are needed for tracking.


OCTAVE (Operationally Critical Threat, Asset, and Vulnerability Evaluation)

OCTAVE was developed by Carnegie Mellon’s CERT Coordination Center. Unlike technical threat models, OCTAVE focuses on organizational risk and is driven by business stakeholders rather than technical staff.

The key principles are that it’s organizationally driven (business units lead the process), self-directed (the organization evaluates itself), asset-focused (start with critical assets, not threats), and risk-based (focus on risk to the organization, not just technical vulnerabilities).

OCTAVE has three variants. The original OCTAVE targets large organizations with 100+ people. OCTAVE-S is streamlined for small organizations under 100 people. OCTAVE Allegro is the most streamlined, risk-based approach suitable for any size.

The methodology has three phases. Phase 1 builds asset-based threat profiles by identifying critical assets, defining security requirements for each, identifying threats to each asset, and creating threat profiles. Phase 2 identifies infrastructure vulnerabilities by examining key operational components, reviewing network architecture, and identifying vulnerabilities that enable threats. Phase 3 develops security strategy and plans by assessing risks (probability and impact), prioritizing based on organizational risk tolerance, developing protection strategy, and creating risk mitigation plans.

OCTAVE’s strengths include being business-driven without technical jargon, focusing on organizational impact, involving business stakeholders deeply, being good for enterprise risk management, and being suitable for non-technical audiences. Its weaknesses are that it’s less detailed than technical threat models, requires significant time and participation, may miss technical vulnerabilities, is less suitable for development teams, and isn’t ideal for application security.

OCTAVE works best for enterprise risk assessments, board-level security discussions, organizational security strategy, non-technical stakeholder engagement, and compliance and governance.


LINDDUN Privacy Threat Modeling

LINDDUN is a privacy-specific threat modeling framework developed by researchers at KU Leuven. As STRIDE focuses on security, LINDDUN focuses on privacy. It’s essential for GDPR, CCPA, and other privacy regulations.

Threat Category | Definition | Example
Linkability | Ability to link two or more actions/data to same user | Tracking users across websites; correlating datasets
Identifiability | Ability to identify person behind data | De-anonymization attacks; PII exposure
Non-repudiation | Subject cannot deny action (when they should be able to) | Inability to deny purchase; forced digital signatures
Detectability | Distinguishing whether data exists | Detecting someone in a database; profiling users
Disclosure of Information | Unauthorized access to personal information | Data breach; insider access to records
Unawareness | Subject unaware of data collection/processing | Hidden tracking; lack of transparency
Non-compliance | Violating privacy regulations | Missing consent; inadequate data protection

To use LINDDUN, you create a Data Flow Diagram (like STRIDE), apply LINDDUN categories to each element, focus on personal data flows, consider the data subject perspective, and map to regulatory requirements like GDPR and CCPA.

LINDDUN Example: User Analytics

For a web analytics service, you’d analyze each category. For Linkability, consider cross-site tracking linking user behavior, or IP addresses linking visits over time. For Identifiability, think about browser fingerprinting identifying users, or email in URL parameters identifying users. For Non-repudiation, examine activity logs users cannot deny or delete. For Detectability, consider whether analytics reveal users in sensitive categories. For Disclosure, look at analytics dashboards exposing personal data or third-party analytics vendor breaches. For Unawareness, examine hidden tracking without notice or consent, or data shared with partners without knowledge. For Non-compliance, consider missing consent mechanisms (GDPR violation) or data retention beyond legal limits.

LINDDUN’s strengths include being privacy-focused (complementing STRIDE), being essential for privacy regulations, considering data subject rights, providing systematic privacy analysis, and having increasing importance with regulations. Its weaknesses are narrower scope than STRIDE, requiring privacy expertise, being less mature than STRIDE, having fewer tools and resources, and potentially overlapping with security threats.

LINDDUN works best for GDPR/CCPA compliance, privacy-sensitive applications, healthcare and financial systems, consumer applications, and any system processing personal data.


Choosing the Right Methodology

Your Situation | Recommended Methodology | Secondary Methodology
New to threat modeling | STRIDE | DREAD (for prioritization)
Enterprise application | PASTA | STRIDE
Need board/executive buy-in | PASTA or OCTAVE | DREAD
Application development | STRIDE | Attack Trees
Privacy regulations | LINDDUN | STRIDE
Small team, limited time | STRIDE | Attack Trees
Risk-based decision making | PASTA | DREAD
Organizational risk | OCTAVE | PASTA

Expert threat modelers often combine approaches. PASTA + STRIDE uses PASTA’s 7 stages for structure while applying STRIDE in Stage 4 (Threat Analysis), getting the best of both: business context plus systematic threats. STRIDE + Attack Trees uses STRIDE to identify threats and attack trees to model attack paths, providing deep understanding of how attacks succeed. LINDDUN + STRIDE combines STRIDE for security threats with LINDDUN for privacy threats, providing comprehensive coverage for regulated industries. PASTA + DREAD uses PASTA for thorough analysis and DREAD for consistent risk scoring, which is great for stakeholder communication.


The Threat Modeling Process

Regardless of methodology, most threat modeling follows a common high-level process. This section provides a methodology-agnostic workflow.

Phase 1: System Understanding

The objective is building comprehensive knowledge of what you’re protecting.

Create Data Flow Diagrams

DFDs are the foundation of most threat modeling. They visualize external entities (rectangles representing users, external systems, potential attackers), processes (circles or rounded rectangles representing code that processes data), data stores (parallel lines representing databases, files, caches, logs), data flows (arrows showing movement of data between elements), and trust boundaries (dotted lines showing where trust level changes).

A simple example:

           Trust Boundary
                :
[User] --HTTPS--:--> (Web App) --SQL--> [Database]
                :         |
                :     [Log File]
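A DFD like this can be captured as plain data, which makes it mechanical to list the flows that cross a trust boundary, the highest-priority review targets. A minimal sketch, with every field name invented for illustration:

```python
# Each flow in the DFD records its endpoints, its protocol, and whether
# it crosses a trust boundary. Hypothetical data model, not a real tool.
flows = [
    {"src": "User", "dst": "Web App", "protocol": "HTTPS",
     "crosses_boundary": True},
    {"src": "Web App", "dst": "Database", "protocol": "SQL",
     "crosses_boundary": False},
    {"src": "Web App", "dst": "Log File", "protocol": "file write",
     "crosses_boundary": False},
]

# Every boundary-crossing flow is a potential attack vector, so it gets
# reviewed first.
attack_surface = [f for f in flows if f["crosses_boundary"]]
for f in attack_surface:
    print(f"Review: {f['src']} -> {f['dst']} over {f['protocol']}")
```

In a real model the boundary itself (Internet to DMZ, user to application) would be named so findings can be grouped by boundary.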

Identify Trust Boundaries

Trust boundaries to identify include network boundaries (Internet to DMZ, DMZ to Internal Network, Internal to Database Network), process boundaries (Web Application to Database, one microservice to another, Application to Operating System), privilege boundaries (User Mode to Kernel Mode, Regular User to Administrator, Application to Database Admin), account boundaries (User A’s Data to User B’s Data, Customer to Internal Employee), physical boundaries (Cloud to On-Premise, Datacenter to Client Device), and organizational boundaries (Your Company to Third-Party Vendor, Your System to Partner System).

The trust boundary principle is this: every trust boundary is a potential attack vector. The less trusted side will attempt to violate assumptions the more trusted side makes.

Document Key Information

For each element in your DFD, document relevant details. For processes, capture purpose and functionality, technology/framework used, input sources and formats, output destinations, authentication mechanisms, authorization model, logging/auditing, and error handling. For data stores, document type (database, file system, cache), sensitivity of data stored, encryption at rest, access controls, backup procedures, and retention policies. For data flows, note protocol (HTTPS, gRPC, message queue), data format (JSON, XML, binary), encryption in transit, authentication method, and volume/frequency. For external entities, record identity (user role, system name), trust level, authentication method, and permissions granted.

Phase 2: Threat Identification

The objective is systematically discovering all credible threats.

Use multiple techniques (covered in Part 2) to identify threats. Apply your chosen framework (STRIDE, PASTA, etc.) for systematic category-by-category analysis that ensures comprehensive coverage. Examine trust boundaries because every boundary is critical; apply the trust discontinuity principle. Use attacker personas including external attackers (opportunistic and targeted), malicious insiders, compromised insiders, nation-state actors, and accidental threats. Review threat libraries like OWASP Top 10, CWE Top 25, MITRE ATT&CK, and industry-specific threats. Conduct collaborative brainstorming with “evil brainstorming” sessions, diverse team perspectives, and no idea too crazy.

Document Each Threat

Create a threat entry with a unique threat ID (like THR-001), a brief descriptive title, the STRIDE category or other classification, a description of what happens in this attack, the attack vector (how it’s executed), assets at risk (what’s being targeted), impact (business and technical consequences), likelihood (probability of occurrence on a 1-5 scale), affected components (which system parts), prerequisites (what the attacker needs), and detection difficulty (how hard to spot on a 1-5 scale).
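A threat entry like this maps naturally onto a small record type, which keeps entries uniform across the register. One possible sketch in Python (field names follow the list above but are otherwise illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    """One threat register entry; adapt fields to your own template."""
    threat_id: str                 # unique ID, e.g. "THR-001"
    title: str                     # brief descriptive title
    category: str                  # STRIDE category or other classification
    description: str               # what happens in this attack
    attack_vector: str             # how it's executed
    assets_at_risk: list[str]      # what's being targeted
    impact: str                    # business and technical consequences
    likelihood: int                # probability of occurrence, 1-5
    affected_components: list[str] = field(default_factory=list)
    prerequisites: str = ""        # what the attacker needs
    detection_difficulty: int = 3  # how hard to spot, 1-5

t = Threat(
    threat_id="THR-001",
    title="Credential stuffing against login",
    category="Spoofing",
    description="Attacker replays breached credentials to take over accounts.",
    attack_vector="Automated login attempts from a botnet",
    assets_at_risk=["user accounts"],
    impact="Account takeover, data exposure",
    likelihood=4,
)
print(t.threat_id, t.category)  # THR-001 Spoofing
```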

Phase 3: Risk Analysis

The objective is prioritizing threats based on likelihood and impact.

Risk Scoring

The simple formula is Risk Score = Likelihood × Impact.

For likelihood, use a 1-5 scale where 1 (Rare, 0-10% probability) requires nation-state resources, 2 (Unlikely, 10-30%) is difficult but possible, 3 (Possible, 30-50%) is common with moderate effort, 4 (Likely, 50-70%) occurs frequently, and 5 (Almost Certain, 70-100%) will occur without prevention.

For impact, use a 1-5 scale where 1 (Minimal) means limited impact and easy recovery, 2 (Minor) means some impact and moderate recovery, 3 (Moderate) means significant impact and substantial recovery, 4 (Major) means severe impact, long recovery, and major consequences, and 5 (Critical) means catastrophic with business-ending potential.

Score Range | Priority | Required Timeline
20-25 | CRITICAL | Immediate action required
15-19 | HIGH | Short-term action (30 days)
10-14 | MEDIUM | Medium-term action (90 days)
5-9 | LOW | Long-term action (180 days)
1-4 | MINIMAL | Monitor, may accept
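The formula and priority bands above can be sketched directly (function names are illustrative):

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Risk = Likelihood x Impact, each on the 1-5 scales above."""
    return likelihood * impact

def priority(score: int) -> str:
    """Map a risk score to the priority bands in the table."""
    if score >= 20:
        return "CRITICAL"
    if score >= 15:
        return "HIGH"
    if score >= 10:
        return "MEDIUM"
    if score >= 5:
        return "LOW"
    return "MINIMAL"

print(priority(risk_score(4, 5)))  # CRITICAL (4 x 5 = 20)
```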

Quantify Business Impact

Translate technical risks to business language. Financial impact includes fines and penalties (HIPAA, PCI-DSS, GDPR), legal costs and settlements, customer compensation, revenue loss during downtime, cost of incident response, and increased insurance premiums. Operational impact includes system downtime (hours/days), recovery time and effort, staff overtime and emergency response, and business process disruption. Reputational impact includes customer churn rate, brand damage (quantify if possible), competitive disadvantage, and media coverage. Regulatory impact includes compliance violations, mandatory audits, corrective action requirements, and license/certification risks.

Phase 4: Mitigation Planning

The objective is defining how to address each threat.

For each threat, choose one of four strategies. Mitigate (most common) implements controls to reduce risk to acceptable level through preventive controls (stop the attack), detective controls (detect when attack occurs), or corrective controls (respond to attack). Eliminate removes the vulnerability entirely by removing the risky feature, redesigning the system, or changing architecture to avoid the risk. Transfer shifts risk to another party through third-party services with liability, cyber insurance, or contractual transfer to vendors. Accept acknowledges risk and documents the decision, which requires low likelihood plus low impact, cost of mitigation exceeding risk value, business owner sign-off, documented assumptions and conditions, and a set review date.
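The acceptance conditions listed above can be expressed as a simple check. This is an illustrative sketch only: the thresholds for "low" likelihood and impact are assumptions, not a standard, and the documentation and review-date requirements are omitted:

```python
def can_accept(likelihood: int, impact: int, mitigation_cost: float,
               risk_value: float, owner_signed_off: bool) -> bool:
    """Check the acceptance conditions from the text: low likelihood AND
    low impact, mitigation costing more than the risk is worth, and a
    business owner sign-off. Thresholds (<= 2) are assumed, not standard."""
    return (likelihood <= 2 and impact <= 2
            and mitigation_cost > risk_value
            and owner_signed_off)

# Rare, minor threat where mitigation costs 10x the risk value:
print(can_accept(2, 1, 50_000, 5_000, owner_signed_off=True))   # True
# Likely, critical threat: never acceptable, whatever the cost:
print(can_accept(4, 5, 50_000, 5_000, owner_signed_off=True))   # False
```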

For each mitigation, document a mitigation ID addressing the specific threat, the mitigation type, description, technical implementation details, additional controls, owner, timeline, cost, priority, verification method, and residual risk after mitigation.

Phase 5: Validation & Review

The objective is verifying the threat model is comprehensive and accurate.

Validation techniques include peer review (another security professional reviews with fresh perspective to find gaps and challenge assumptions), red team perspective (asking “what would an attacker do?”, simulating attack scenarios, finding overlooked threats), comparison to similar systems (reviewing published threat models, studying breaches in similar systems, applying industry best practices), penetration testing (validating technical threats with real attacks, confirming detection mechanisms work, testing mitigation effectiveness), tabletop exercises (walking through attack scenarios with the team, testing incident response procedures, identifying process gaps), and stakeholder sign-off (security team approval, architecture review, business acceptance of residual risks, compliance review).

Review Checklist

For threat model completeness, verify all system components are represented in the DFD, all trust boundaries are identified and analyzed, all data flows are documented, all sensitive assets are identified, STRIDE (or chosen framework) is applied to all elements, multiple attacker personas are considered, industry-specific threats are included, third-party integration threats are addressed, insider threats are considered, and physical security (if applicable) is addressed.

For threat quality, verify each threat has a clear attack scenario, likelihood and impact are justified, business impact is quantified, affected components are identified, and detection methods are specified.

For mitigation quality, verify all critical/high risks have mitigations, owners are assigned to each mitigation, timelines are realistic and approved, verification methods are defined, and residual risk is calculated and acceptable.

For documentation, verify diagrams are clear and understandable, assumptions are documented, risk acceptance forms are signed, action items are tracked, and a review schedule is established.

Phase 6: Maintenance

The objective is keeping the threat model current and relevant.

When to Update

Mandatory updates are required for new features or major changes, architecture modifications, new third-party integrations, technology stack changes, after security incidents, and regulatory requirement changes.

Periodic reviews should occur quarterly for critical systems, semi-annually for high-risk systems, annually for standard systems, and biennially for low-risk systems.

Maintenance Activities

Threat intelligence integration involves monitoring new attack techniques, tracking CVEs in your technology stack, following security researchers, and reviewing breach reports in your industry. Incident correlation maps real incidents to the threat model; if an incident wasn’t in the model, ask why not, update methodology to catch similar issues, and share lessons learned. Technology updates involve new framework versions, new security features, deprecated technologies, and cloud service changes. Assumption validation revisits documented assumptions to check if they’re still true, whether conditions have changed, and whether threats need updating. Mitigation verification checks whether mitigations are still in place, still effective, have been bypassed, or need updates.

Continuous Improvement

Learn from each threat modeling exercise by asking what threats you missed and why, what techniques were most effective, what took too long and how to streamline, what stakeholders should be involved next time, what tools would help, and what training is needed.

Build organizational knowledge by creating threat libraries, documenting patterns and anti-patterns, training more team members, sharing findings across teams, and contributing to the community.


Summary - Part 1

This concludes Part 1: Introduction & Methodologies. You now understand what threat modeling is and why it’s critical for security, when to threat model and when to update, five major methodologies (STRIDE, PASTA, DREAD, OCTAVE, LINDDUN), the six-phase threat modeling process that works with any methodology, and how to choose the right methodology for your situation.

Continue to Part 2: Threat Enumeration Techniques to learn 10+ specific techniques for discovering threats systematically, then proceed to Part 3 for the complete real-world healthcare example showing all techniques in practice.


Part 1 Complete | Continue to Part 2 →