SC-031: OAuth Token Abuse — Operation Consent Trap¶
Scenario Header
Type: Cloud / Identity | Difficulty: ★★★★☆ | Duration: 3–4 hours | Participants: 4–8
Threat Actor: VELVET HOOK — espionage group specializing in OAuth/cloud identity abuse for persistent corporate espionage
Primary ATT&CK Techniques: T1550.001 · T1098.003 · T1114 · T1566.002 · T1528 · T1537
Facilitator Note
This scenario covers malicious OAuth application consent phishing — a growing attack vector that bypasses MFA and traditional email security controls. Participants should include cloud security engineers, identity/IAM administrators, SOC analysts, and M365 administrators. The scenario highlights the gap between perimeter security and cloud identity security. All data is synthetic. All organizations, IPs, and indicators are fictional.
Threat Actor Profile¶
VELVET HOOK is an espionage-motivated threat group tracked since early 2025, specializing in OAuth and cloud identity abuse against professional services firms, law firms, and consulting companies. The group targets organizations that handle sensitive client data — mergers and acquisitions, litigation, government contracts — using the stolen data for economic espionage and insider trading.
VELVET HOOK's signature technique is illicit consent grant attacks: they register malicious Azure AD/Entra ID applications that masquerade as legitimate productivity tools, then distribute consent phishing emails to target employees. When a victim grants consent, the malicious app receives OAuth tokens with delegated permissions to read email, access OneDrive, and enumerate directory information — all without requiring the victim's password or MFA token. The OAuth refresh tokens provide persistent access for up to 90 days (or until revoked), surviving password resets.
Motivation: Economic espionage — theft of M&A documentation, legal strategies, government contract proposals, and client financial data. Estimated intelligence value per operation: $5M–$50M (based on data sensitivity). The group has been linked to operations targeting firms involved in cross-border acquisitions and defense contracting.
Scenario Narrative¶
Scenario Context
Horizon Consulting Group is a mid-sized professional services firm with 450 employees across 3 offices, specializing in management consulting for government and defense clients. They use Microsoft 365 E5 with Entra ID for identity, Exchange Online for email, OneDrive/SharePoint for document management, and Teams for collaboration. The firm handles sensitive pre-decisional documents including government contract proposals, M&A due diligence reports, and defense sector strategy papers. Horizon's IT team is 6 people; they have 1 dedicated security analyst who primarily manages endpoint protection. Azure AD app consent policy: users can consent to apps requesting delegated permissions — admin consent is only required for application-level permissions. No third-party CASB or cloud security posture management (CSPM) tool is deployed.
Phase 1 — Malicious App Registration & Phishing Campaign (~30 min)¶
VELVET HOOK registers a malicious OAuth application in a separate Entra ID tenant they control (velvet-apps.onmicrosoft.com). The application is named "SecureDoc Viewer Pro" with a convincing publisher name, logo, and privacy policy URL. The app requests the following delegated permissions:
- Mail.Read — Read user mailbox
- Files.Read.All — Read all files the user can access
- User.Read — Read user profile
- Contacts.Read — Read user contacts
- offline_access — Maintain access (refresh tokens)
The group crafts a consent phishing campaign targeting 85 Horizon employees. The phishing emails impersonate Horizon's IT department, claiming that a new "secure document viewer" is required to access a shared compliance report. The email contains a link to the Microsoft consent prompt:
https://login.microsoftonline.com/common/oauth2/v2.0/authorize?client_id=<malicious-app-id>&redirect_uri=...&scope=Mail.Read+Files.Read.All+User.Read+Contacts.Read+offline_access
Because the link goes to login.microsoftonline.com (Microsoft's legitimate OAuth endpoint), email security gateways do not flag it as malicious. The consent page shows Microsoft's standard "This app wants to access your data" prompt.
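Analysts can triage suspicious consent links by pulling the client_id and requested scopes straight out of the authorize URL. The sketch below does this with the standard library; the allowlisted app ID is an illustrative placeholder, not a real approved inventory.

```python
# Sketch: extract client_id and scopes from a Microsoft OAuth authorize URL.
# The APPROVED_APP_IDS set is an illustrative placeholder inventory.
from urllib.parse import urlparse, parse_qs

APPROVED_APP_IDS = {"00000000-0000-0000-0000-000000000001"}  # hypothetical allowlist

def analyze_consent_url(url):
    """Pull client_id, scopes, and endpoint type out of an authorize URL."""
    parsed = urlparse(url)
    params = parse_qs(parsed.query)  # '+' in scope decodes to spaces
    client_id = params.get("client_id", [""])[0]
    scopes = params.get("scope", [""])[0].split()
    return {
        "client_id": client_id,
        "scopes": scopes,
        "is_authorize_endpoint": parsed.path.endswith("/authorize"),
        "approved": client_id in APPROVED_APP_IDS,
    }

url = ("https://login.microsoftonline.com/common/oauth2/v2.0/authorize"
       "?client_id=a1b2c3d4-e5f6-7890-abcd-ef1234567890"
       "&redirect_uri=https://securedoc-viewer.example.com/callback"
       "&scope=Mail.Read+Files.Read.All+User.Read+Contacts.Read+offline_access")
result = analyze_consent_url(url)
print(result["client_id"], result["approved"], result["scopes"])
```

Running this against the phishing URL above surfaces the unrecognized app ID and the high-risk Mail.Read and Files.Read.All scopes, which is exactly the triage step called out in the Expected Analyst Actions below.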
Evidence Artifacts:
| Artifact | Detail |
|---|---|
| Entra ID App Registration (attacker tenant) | App: "SecureDoc Viewer Pro" — App ID: a1b2c3d4-e5f6-7890-abcd-ef1234567890 — Tenant: velvet-apps.onmicrosoft.com — Created: 2026-03-10T08:22:00Z — Redirect URI: https://securedoc-viewer.example.com/callback |
| Phishing Email | From: it-support@horizon-notifications.example.com (attacker-controlled lookalike domain) — Subject: "ACTION REQUIRED: Install SecureDoc Viewer for Compliance Report Access" — Body: "IT has deployed a new secure document viewer..." — Link: login.microsoftonline.com/common/oauth2/v2.0/authorize?... |
| Email Gateway | 85 emails delivered — SPF: PASS (attacker controls horizon-notifications.example.com) — DKIM: PASS — DMARC: N/A (Horizon's DMARC is p=none) — URL: login.microsoftonline.com — Verdict: CLEAN (legitimate Microsoft domain) |
| DMARC Policy | _dmarc.horizon-consulting.example.com — v=DMARC1; p=none; rua=... — Policy: none (monitor only, no enforcement) — DMARC enforcement would not have helped (attacker used different domain) |
Phase 1 — Discussion Inject
Technical: The phishing link points to login.microsoftonline.com — a legitimate Microsoft domain — so email security cannot detect it as malicious based on URL reputation. What detection strategies exist for consent phishing? Consider: user awareness training on OAuth consent prompts, restricting app consent to admin-only (Entra ID setting), pre-approved app allowlists, and monitoring Entra ID audit logs for consent grants to unknown applications.
Decision: Horizon's current policy allows users to consent to apps requesting delegated permissions. Changing to admin-only consent would require IT to review and approve every third-party app integration — potentially creating a bottleneck for 450 employees. Do you (A) enforce admin-only consent immediately, or (B) implement a risk-based policy where users can consent to low-risk permissions but admin approval is required for mail/file access?
Expected Analyst Actions:
- [ ] Identify the phishing campaign — analyze email headers, sender domain, and URL parameters
- [ ] Extract the malicious app client_id from the OAuth authorization URL
- [ ] Check Entra ID audit logs for any consent grants to the identified app ID
- [ ] Review the app's requested permissions — assess risk level
- [ ] Investigate the redirect URI domain (securedoc-viewer.example.com)
- [ ] Check if Horizon's app consent policy permits user-level consent for these permissions
Phase 2 — Consent Grants & Token Harvesting (~30 min)¶
Of the 85 employees who received the phishing email, 23 click the link and are presented with Microsoft's OAuth consent prompt. 14 employees grant consent — they see the Microsoft login page (already authenticated via SSO), review the permission list, and click "Accept." The malicious app receives an authorization code, which VELVET HOOK exchanges for access tokens and refresh tokens via the standard OAuth 2.0 token endpoint.
Key victims include:
- m.chen@horizon-consulting.example.com — Senior Partner, M&A Practice (OneDrive contains deal documents)
- s.williams@horizon-consulting.example.com — Defense Sector Lead (mailbox contains classified contract discussions)
- j.patel@horizon-consulting.example.com — Finance Director (access to financial projections and client billing)
The refresh tokens have a default lifetime of 90 days and are not invalidated by password changes — only by explicitly revoking the app's consent or the specific refresh token.
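The code-for-token exchange is the standard OAuth 2.0 authorization_code grant against the v2.0 token endpoint. The sketch below only builds the request body (nothing is sent over the network); the client secret and authorization code are placeholders, and the field names follow the generic OAuth 2.0 grant shape.

```python
# Sketch of the OAuth 2.0 authorization-code exchange performed against the
# v2.0 token endpoint. The request is constructed but never sent; the secret
# and code values are placeholders, not real credentials.
from urllib.parse import urlencode

TOKEN_ENDPOINT = "https://login.microsoftonline.com/common/oauth2/v2.0/token"

def build_token_request(client_id, client_secret, code, redirect_uri):
    """URL-encoded POST body for grant_type=authorization_code."""
    return urlencode({
        "client_id": client_id,
        "client_secret": client_secret,
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": redirect_uri,
        # offline_access in the original consent is what yields a refresh_token
        "scope": "Mail.Read Files.Read.All User.Read Contacts.Read offline_access",
    })

body = build_token_request(
    "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
    "placeholder-secret", "placeholder-auth-code",
    "https://securedoc-viewer.example.com/callback")
print(body)
```

The response to this POST contains both an access_token (short-lived) and, because offline_access was consented, a refresh_token, which is the artifact that survives the victim's password reset.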
Evidence Artifacts:
| Artifact | Detail |
|---|---|
| Entra ID Audit Log | 2026-03-12T14:23:00Z — Activity: "Consent to application" — App: "SecureDoc Viewer Pro" (a1b2c3d4...) — User: m.chen@horizon-consulting.example.com — Permissions: Mail.Read, Files.Read.All, User.Read, Contacts.Read, offline_access — IP: 192.0.2.60 (corporate office) |
| Entra ID Audit Log | 14 consent events total — 2026-03-12T14:23Z to 2026-03-12T16:45Z — All from corporate IP range 192.0.2.60/28 — All granted full requested permissions |
| Entra ID Sign-in Log | App: "SecureDoc Viewer Pro" — Sign-in type: "Service principal" — No interactive sign-in required after initial consent — Token refresh from 198.51.100.200 (attacker infrastructure) |
| Token Exchange Log | 2026-03-12T14:23:15Z — Token endpoint: login.microsoftonline.com/.../token — Grant type: authorization_code — Client ID: a1b2c3d4... — Redirect URI: securedoc-viewer.example.com/callback — Response: access_token + refresh_token issued |
| Microsoft Graph API | First API call from app: 2026-03-12T14:24:00Z — GET /me — User: m.chen — Source IP: 198.51.100.200 — Followed by GET /me/messages and GET /me/drive/root/children |
Phase 2 — Discussion Inject
Technical: 14 out of 23 employees who saw the consent prompt clicked "Accept" (60.8% consent rate). What factors influence consent decisions? Consider: the prompt comes from Microsoft (trusted), the app name sounds legitimate, users are conditioned to click "Accept" on permission prompts, and there is no visual indicator of app trustworthiness. How would Entra ID's "admin consent workflow" change this dynamic?
Decision: You discover that refresh tokens survive password resets. An employee who consented can change their password, but the malicious app retains access via the refresh token. This is by design in OAuth 2.0. How do you communicate this to non-technical staff? What is the correct remediation — and why is "change your password" insufficient?
Expected Analyst Actions:
- [ ] Query Entra ID audit logs for all consent grants to app ID a1b2c3d4...
- [ ] Identify all 14 affected users — assess data sensitivity for each user's mailbox and OneDrive
- [ ] Check Entra ID sign-in logs for service principal sign-ins from the malicious app
- [ ] Review Microsoft Graph API audit logs for data access patterns by the malicious app
- [ ] Determine if any admin-level users granted consent (which would affect all users)
- [ ] Assess whether refresh tokens have been used from attacker infrastructure IPs
Phase 3 — Mailbox & OneDrive Exfiltration (~45 min)¶
Using the harvested OAuth tokens, VELVET HOOK systematically accesses the mailboxes and OneDrive accounts of all 14 compromised users via the Microsoft Graph API. The exfiltration is conducted from cloud infrastructure at 198.51.100.200 and 198.51.100.201, using rate-limited API calls to avoid triggering throttling alerts.
Over 5 days, the group exfiltrates:
- 4,721 emails from 14 mailboxes — focusing on emails containing keywords: "acquisition," "merger," "classified," "ITAR," "proposal," "confidential"
- 892 OneDrive files (3.2 GB) — including M&A due diligence documents, defense contract proposals, client financial models, and board presentation decks
- Contact lists for all 14 users — 2,847 unique contacts including government officials and defense contractor executives
The API calls use legitimate Microsoft Graph endpoints (graph.microsoft.com), leaving network-level detection essentially blind — the traffic appears identical to normal M365 application activity.
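One detection angle that does not depend on network inspection is the timing profile: scripted exfiltration produces near-constant hourly call volumes, while human-driven apps are bursty and idle overnight. The sketch below applies a coefficient-of-variation check to synthetic hourly counts; the threshold and data are illustrative, not tuned values.

```python
# Sketch: flag service principals whose hourly Graph API call counts are
# machine-like in their uniformity (no day/night variance). The threshold
# and synthetic data below are illustrative only.
import statistics

def looks_automated(hourly_counts, cv_threshold=0.15):
    """A low coefficient of variation across hours suggests scripted access."""
    mean = statistics.mean(hourly_counts)
    if mean == 0:
        return False
    cv = statistics.stdev(hourly_counts) / mean
    return cv < cv_threshold

# Attacker app: ~400 calls every hour, 24/7 (matches the evidence pattern).
attacker = [400 + (i % 3) for i in range(48)]
# Human-driven app: bursty during business hours, idle overnight.
human_day = [0] * 8 + [120, 300, 450, 200, 90, 310, 280, 60] + [0] * 8
human = human_day * 2

print(looks_automated(attacker), looks_automated(human))
```

This is the same intuition behind the "no weekday/weekend variance" observation in the evidence table: rate limiting defeats throttling alerts, but it makes the activity curve unnaturally flat.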
Evidence Artifacts:
| Artifact | Detail |
|---|---|
| Microsoft Graph API Log | App: a1b2c3d4... — 2026-03-12T14:24Z to 2026-03-17T22:00Z — Total API calls: 47,832 — Endpoints: /me/messages, /me/messages/{id}, /me/drive/root/children, /me/drive/items/{id}/content, /me/contacts |
| Unified Audit Log (Exchange) | MailItemsAccessed — App: "SecureDoc Viewer Pro" — User: m.chen — 1,247 items accessed — Source IP: 198.51.100.200 — 2026-03-12 to 2026-03-17 |
| Unified Audit Log (OneDrive) | FileDownloaded — App: "SecureDoc Viewer Pro" — User: m.chen — 312 files — Total: 847 MB — Includes: M&A_DueDiligence_ProjectAlpha.xlsx, DefenseContract_Proposal_v3.docx |
| API Rate Pattern | Avg 400 API calls/hour (below 10,000/hour throttle limit) — Consistent 24/7 activity pattern — No human interaction pattern (no weekday/weekend variance) |
| Sensitive File List (sample) | ProjectAlpha_Valuation_Model.xlsx — ITAR_Compliance_Assessment.pdf — BoardDeck_Q1_2026_CONFIDENTIAL.pptx — GovContract_Proposal_FOUO.docx — ClientList_DefenseSector.xlsx |
Phase 3 — Discussion Inject
Technical: The exfiltration uses legitimate Microsoft Graph API calls — there is no malware, no C2 channel, and no data leaving through unusual ports. How do you detect this? Consider: monitoring Entra ID service principal sign-ins from unusual IPs, tracking Unified Audit Log MailItemsAccessed events by application, anomaly detection on Graph API call volume per app, and CASB solutions that baseline normal app behavior.
Decision: The exfiltrated data includes ITAR-controlled defense contract proposals and pre-decisional M&A documents. This triggers multiple regulatory notification requirements: ITAR violation reporting (DDTC), SEC insider trading concerns (M&A data), and potential espionage referral to FBI CI. How do you prioritize these notifications? Who leads — legal, CISO, or external counsel?
Expected Analyst Actions:
- [ ] Query Unified Audit Log for all MailItemsAccessed and FileDownloaded events by app ID
- [ ] Identify all files and emails accessed — classify by sensitivity level
- [ ] Map API call source IPs — compare against known corporate and legitimate Microsoft IPs
- [ ] Assess whether any ITAR-controlled, classified, or legally privileged data was exfiltrated
- [ ] Calculate total data exfiltration volume and timeline
- [ ] Determine if the attacker used the contact lists to expand targeting to other organizations
Phase 4 — Persistence & Lateral Expansion (~30 min)¶
VELVET HOOK establishes additional persistence mechanisms beyond the initial OAuth tokens. Using the Contacts.Read permission and email content from compromised accounts, the group identifies high-value targets at Horizon's client organizations. The original app requested only Mail.Read, not Mail.Send, so the group cannot send mail as m.chen through that app. Instead, they use information from m.chen's emails to craft highly convincing spear-phishing emails from a lookalike domain (horizon-consultlng.example.com — "i" replaced with "l", visually similar in many fonts). The emails target 3 defense contractor contacts found in the exfiltrated data.
Additionally, the group registers 2 more malicious apps with slightly different names ("DocuSign Integration Helper" and "Teams Meeting Analyzer") and targets the same 14 users with new consent phishing emails — diversifying persistence across multiple app registrations.
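Lookalike domains like horizon-consultlng.example.com can be caught by normalizing common homoglyph substitutions and comparing against the legitimate domain. The sketch below is a minimal check using difflib; the homoglyph map and similarity threshold are illustrative, not exhaustive.

```python
# Sketch: flag candidate typosquats of the corporate domain. The homoglyph
# map and threshold are illustrative; production tooling would cover far
# more substitutions (Unicode confusables, added/dropped characters, etc.).
import difflib

HOMOGLYPHS = str.maketrans({"l": "i", "1": "i", "0": "o", "5": "s"})

def is_lookalike(candidate, legit, threshold=0.9):
    """True if candidate differs from legit but is near-identical after
    normalizing common character swaps (e.g. 'i' -> 'l')."""
    if candidate == legit:
        return False  # the real domain is not a lookalike of itself
    a = candidate.lower().translate(HOMOGLYPHS)
    b = legit.lower().translate(HOMOGLYPHS)
    return difflib.SequenceMatcher(None, a, b).ratio() >= threshold

legit = "horizon-consulting.example.com"
print(is_lookalike("horizon-consultlng.example.com", legit))   # the Phase 4 typosquat
print(is_lookalike("unrelated-domain.example.com", legit))     # unrelated domain
```

Feeding newly registered domains (from certificate transparency logs or registrar feeds) through a check like this supports the "monitor for typosquat domains" action below.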
Evidence Artifacts:
| Artifact | Detail |
|---|---|
| Entra ID Audit Log | 2026-03-18T09:15:00Z — New consent grant — App: "DocuSign Integration Helper" (b2c3d4e5...) — User: m.chen — Permissions: Mail.Read, Mail.Send, Files.ReadWrite.All — Escalated permissions: Mail.Send + Files.ReadWrite |
| Entra ID Audit Log | 2026-03-18T10:22:00Z — New consent grant — App: "Teams Meeting Analyzer" (c3d4e5f6...) — User: s.williams — Permissions: Mail.Read, Calendars.Read, User.Read.All |
| Domain Registration | horizon-consultlng.example.com — Registered: 2026-03-17 — Registrar: bulletproof provider — Typosquat of horizon-consulting.example.com — SPF/DKIM configured |
| External Phishing (sample) | From: m.chen@horizon-consultlng.example.com — To: d.martinez@defense-contractor.example.com — Subject: "Re: Project Alpha Update — Q1 Review Deck Attached" — Attachment: ProjectAlpha_Q1_Review.docx (macro-enabled) |
| Graph API Log | App: b2c3d4e5... — Mail.Send — 3 emails sent from m.chen's mailbox to external defense contractor contacts — 2026-03-19T11:00Z |
Phase 4 — Discussion Inject
Technical: The second malicious app ("DocuSign Integration Helper") requested Mail.Send and Files.ReadWrite.All — escalated permissions compared to the original app. If admin consent were required for Mail.Send, would this have been caught? What is the principle of least privilege applied to OAuth scopes? How do you monitor for permission escalation across multiple app registrations targeting the same users?
Decision: The attacker is now using compromised user identity to send phishing to defense contractor contacts. This expands the incident beyond Horizon to their clients. Do you (A) notify affected defense contractors immediately — revealing the breach to clients and risking contract termination, or (B) contain silently first — revoking tokens and app consents before notifying clients? What are the legal and ethical obligations?
Expected Analyst Actions:
- [ ] Search Entra ID audit logs for ALL consent grants in the past 30 days — identify all malicious apps
- [ ] Revoke consent for all identified malicious apps across all affected users
- [ ] Invalidate all refresh tokens for the 14 affected users (Revoke-AzureADUserAllRefreshToken)
- [ ] Monitor for typosquat domains targeting Horizon's brand
- [ ] Notify defense contractor contacts who received phishing from the attacker
- [ ] Block the malicious app IDs at the Entra ID level (enterprise app block list)
Phase 5 — Detection & Containment (~30 min)¶
The attack is discovered on Day 8 when Horizon's security analyst reviews the weekly Entra ID sign-in report and notices service principal sign-ins for "SecureDoc Viewer Pro" — an app no one in IT recognizes. The analyst investigates and discovers the consent grants, API access patterns, and data exfiltration.
Evidence Artifacts:
| Artifact | Detail |
|---|---|
| Detection Trigger | Weekly Entra ID sign-in review — 2026-03-20T09:00Z — Analyst noticed: "SecureDoc Viewer Pro" — 47,832 Graph API calls in 8 days — Not in IT-approved app inventory |
| Entra ID Enterprise Apps | 3 malicious apps identified: "SecureDoc Viewer Pro" (a1b2c3d4...), "DocuSign Integration Helper" (b2c3d4e5...), "Teams Meeting Analyzer" (c3d4e5f6...) — Total affected users: 14 — Total consent grants: 18 |
| Token Revocation | 2026-03-20T10:30Z — All refresh tokens revoked for 14 users — All 3 malicious app registrations blocked — Enterprise app consent revoked |
| Incident Timeline | Initial compromise: 2026-03-12T14:23Z — Detection: 2026-03-20T09:00Z — Dwell time: 7 days 18 hours — Data exfiltrated: 4,721 emails + 892 files (3.2 GB) |
| Post-Containment Check | 2026-03-20T11:00Z — Graph API calls from malicious apps: 0 (tokens revoked successfully) — No new consent grants detected |
Phase 5 — Discussion Inject
Technical: Detection took 8 days and was manual — a weekly report review. What automated detection would have caught this sooner? Consider: real-time alerting on consent grants to unrecognized apps, anomaly detection on Graph API call volume per service principal, geo-impossible token usage (consent from corporate IP, API calls from attacker IP), and integration with Microsoft Defender for Cloud Apps.
Decision: You have revoked all tokens and blocked the malicious apps. However, the attacker has already exfiltrated 3.2 GB of sensitive data including ITAR-controlled documents. Containment stops further access but does not recover the data. What is your notification and damage assessment plan? How do you determine what the attacker will do with the stolen data?
Expected Analyst Actions:
- [ ] Block all 3 malicious app IDs in Entra ID enterprise application settings
- [ ] Revoke all refresh tokens and sign-in sessions for all 14 affected users
- [ ] Change app consent policy to admin-only or implement admin consent workflow
- [ ] Conduct full audit of all enterprise apps with consent grants — remove any unrecognized apps
- [ ] Review Unified Audit Log for complete data access timeline and scope
- [ ] Enable real-time alerting for new app consent grants
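The real-time alerting action above can be prototyped as a simple triage function over Entra ID "Consent to application" audit events. The event shape below is simplified from the audit log schema, and the approved-app inventory and risky-scope list are illustrative placeholders.

```python
# Sketch: alert on "Consent to application" events for apps outside an
# approved inventory. The event dict is a simplified stand-in for an Entra ID
# audit log record; APPROVED_APPS and HIGH_RISK_SCOPES are placeholders.
APPROVED_APPS = {"00000000-0000-0000-0000-000000000001"}
HIGH_RISK_SCOPES = {"Mail.Read", "Mail.Send", "Files.Read.All",
                    "Files.ReadWrite.All", "Contacts.Read"}

def triage_consent_event(event):
    """Return an alert string for consent to an unapproved app, else None."""
    if event.get("operationName") != "Consent to application":
        return None
    if event["appId"] in APPROVED_APPS:
        return None
    risky = sorted(HIGH_RISK_SCOPES.intersection(event.get("permissions", [])))
    severity = "HIGH" if risky else "MEDIUM"
    return (f"[{severity}] {event['user']} consented to unapproved app "
            f"{event['appId']} (risky scopes: {', '.join(risky) or 'none'})")

event = {"operationName": "Consent to application",
         "appId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
         "user": "m.chen@horizon-consulting.example.com",
         "permissions": ["Mail.Read", "Files.Read.All", "offline_access"]}
alert = triage_consent_event(event)
print(alert)
```

With alerting like this in place, the m.chen consent on 2026-03-12 would have fired within minutes rather than surfacing in a weekly report 8 days later.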
Phase 6 — Eradication & Hardening (~30 min)¶
Horizon's incident response team, augmented by external cloud security consultants, conducts a comprehensive remediation effort. The focus extends beyond removing the immediate threat to hardening the M365 environment against future OAuth-based attacks.
Evidence Artifacts:
| Artifact | Detail |
|---|---|
| Remediation Actions | 3 malicious apps blocked and consent revoked — 14 user tokens revoked — App consent policy changed to admin-only — 47 existing third-party app consents audited (3 additional suspicious apps removed) |
| Policy Changes | Entra ID: User consent disabled — Admin consent workflow enabled — App governance policy: only verified publisher apps permitted — Conditional access: block service principal sign-ins from non-approved IP ranges |
| ITAR Notification | DDTC voluntary disclosure filed: 2026-03-21 — Potential unauthorized access to ITAR-controlled technical data — External counsel engaged |
| Client Notification | 3 defense contractor clients notified of potential exposure — Horizon's M&A client (Project Alpha) notified — 2 clients initiated contract review |
| Security Tooling | Microsoft Defender for Cloud Apps deployed — App governance alerts configured — Continuous monitoring of Graph API activity by service principals |
Phase 6 — Discussion Inject
Technical: Entra ID now has "admin consent workflow" — users can request app access, but an admin must approve. What review criteria should the admin use when evaluating consent requests? Consider: publisher verification status, permissions requested (read vs. write vs. send), redirect URI domain reputation, and whether the app is listed in the Microsoft app gallery.
Decision: Two defense contractor clients are reviewing their contracts with Horizon after the breach. The ITAR voluntary disclosure may result in penalties. Total estimated cost: $2.5M (incident response) + $5M–$15M (client churn and ITAR penalties) + $50M+ (potential loss of defense contracts). How does this cost compare to the $15K/year for CASB + app governance tooling that would have prevented the attack?
Expected Analyst Actions:
- [ ] Audit all enterprise application consent grants across the entire tenant
- [ ] Implement admin-only consent policy with documented approval workflow
- [ ] Deploy Microsoft Defender for Cloud Apps or equivalent CASB
- [ ] Configure real-time alerts for: new app consent, high-volume Graph API access, service principal sign-ins from new IPs
- [ ] Conduct security awareness training focused on OAuth consent phishing
- [ ] Review and enforce DMARC policy (p=reject) for all Horizon domains
Detection Opportunities¶
KQL Detection Queries¶
// Detect new OAuth app consent grants
AuditLogs
| where OperationName == "Consent to application"
| extend AppName = tostring(TargetResources[0].displayName)
| extend AppId = tostring(TargetResources[0].id)
| extend UserGranting = tostring(InitiatedBy.user.userPrincipalName)
| extend Permissions = tostring(AdditionalDetails)
| where AppId !in ("known-app-id-1", "known-app-id-2") // allowlisted apps
| project TimeGenerated, UserGranting, AppName, AppId, Permissions
| sort by TimeGenerated desc
// Detect excessive Graph API calls by service principal
AADServicePrincipalSignInLogs
| where ResultType == 0
| summarize APICallCount=count(), DistinctResources=dcount(ResourceDisplayName) by AppId, ServicePrincipalName, IPAddress, bin(TimeGenerated, 1h)
| where APICallCount > 500
| extend Alert = strcat("High-volume API activity: ", ServicePrincipalName, " — ", APICallCount, " calls/hr")
// Detect MailItemsAccessed by non-standard applications
OfficeActivity
| where Operation == "MailItemsAccessed"
| where ClientAppId !in ("known-outlook-appid", "known-mobile-appid")
| summarize MailsAccessed=count(), DistinctUsers=dcount(UserId) by ClientAppId, AppDisplayName=ClientInfoString, bin(TimeGenerated, 1h)
| where MailsAccessed > 100
| extend Alert = "Non-standard app accessing mailbox at high volume"
// Detect consent phishing URLs in email
EmailUrlInfo
| where Url has "login.microsoftonline.com" and Url has "oauth2" and Url has "authorize"
| extend ClientId = extract("client_id=([a-f0-9-]+)", 1, Url)
| where ClientId !in ("known-app-id-1", "known-app-id-2")
| join kind=inner EmailEvents on NetworkMessageId
| project TimeGenerated, SenderFromAddress, RecipientEmailAddress, Subject, ClientId, Url
Splunk (SPL) Detection Queries¶
// OAuth consent grant detection
index=azure sourcetype=azure_audit operationName="Consent to application"
| spath output=app_name path=targetResources{}.displayName
| spath output=app_id path=targetResources{}.id
| spath output=user path=initiatedBy.user.userPrincipalName
| where NOT app_id IN ("known-app-id-1", "known-app-id-2")
| table _time, user, app_name, app_id
| eval alert="New OAuth app consent — investigate if authorized"
// High-volume Graph API access by service principal
index=azure sourcetype=azure_signin loginType="servicePrincipal" resultType=0
| bucket _time span=1h
| stats count as api_calls dc(resourceDisplayName) as resources by appDisplayName, appId, ipAddress, _time
| where api_calls > 500
| eval alert="Excessive API calls from service principal: ".appDisplayName
// MailItemsAccessed by unrecognized app
index=o365 sourcetype=o365_management_activity Operation=MailItemsAccessed
| where NOT ClientAppId IN ("known-outlook-id", "known-mobile-id")
| stats count as items_accessed dc(UserId) as affected_users by ClientAppId, ClientInfoString
| where items_accessed > 100
| eval severity="HIGH"
| eval alert="Unrecognized app accessing mailboxes — possible OAuth token abuse"
// Detect multiple consent grants in short timeframe (campaign indicator)
index=azure sourcetype=azure_audit operationName="Consent to application"
| spath output=app_id path=targetResources{}.id
| spath output=user path=initiatedBy.user.userPrincipalName
| bucket _time span=4h
| stats dc(user) as consenting_users count as consent_events by app_id, _time
| where consenting_users >= 3
| eval alert="Multiple users consenting to same app — possible consent phishing campaign"
Incident Response Checklist¶
Immediate Actions (0–2 hours)¶
- [ ] Identify all malicious app IDs from Entra ID audit logs
- [ ] Block malicious apps at the enterprise application level in Entra ID
- [ ] Revoke all refresh tokens for affected users (Revoke-AzureADUserAllRefreshToken)
- [ ] Revoke app consent grants for all affected users
- [ ] Disable user-level app consent (switch to admin-only or admin consent workflow)
- [ ] Force re-authentication for all affected users
Short-Term Actions (2–48 hours)¶
- [ ] Audit all enterprise application consent grants across the entire tenant
- [ ] Review Unified Audit Log for data access by malicious apps — determine exfiltration scope
- [ ] Classify all accessed/exfiltrated data by sensitivity (ITAR, PII, financial, privileged)
- [ ] Notify legal counsel — assess regulatory reporting obligations (ITAR, SEC, state breach laws)
- [ ] Check for typosquat domains impersonating the organization
- [ ] Notify external contacts who received phishing from compromised accounts
Long-Term Actions (1–4 weeks)¶
- [ ] Deploy CASB (Microsoft Defender for Cloud Apps or equivalent)
- [ ] Configure real-time app governance alerts
- [ ] Implement conditional access policies for service principal sign-ins
- [ ] Conduct OAuth consent phishing awareness training for all staff
- [ ] Enforce DMARC p=reject for all organizational domains
- [ ] Establish quarterly review of all enterprise app consent grants
- [ ] Implement app verification requirements (verified publisher only)
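Checking whether a domain's DMARC policy actually enforces rejection is a one-line DNS lookup plus some tag parsing. The sketch below evaluates record strings directly (a real check would query the _dmarc.<domain> TXT record); the example records mirror the p=none and p=reject states from this scenario.

```python
# Sketch: evaluate whether a DMARC TXT record enforces rejection. Records are
# supplied as strings here; a real check would resolve _dmarc.<domain> TXT.
def dmarc_enforcement(record):
    """Return the effective policy ('none', 'quarantine', 'reject'),
    or 'missing' if the record is not a valid DMARC record."""
    tags = {}
    for part in record.split(";"):
        if "=" in part:
            key, _, value = part.strip().partition("=")
            tags[key.strip()] = value.strip()
    if tags.get("v") != "DMARC1":
        return "missing"
    return tags.get("p", "none")  # per DMARC, a record without p is not enforcing

monitor_only = "v=DMARC1; p=none; rua=mailto:dmarc@horizon-consulting.example.com"
enforced = "v=DMARC1; p=reject; rua=mailto:dmarc@horizon-consulting.example.com"
print(dmarc_enforcement(monitor_only), dmarc_enforcement(enforced))
```

As the Phase 1 evidence notes, DMARC enforcement would not have blocked this particular campaign (the attacker used a domain they controlled), but p=none leaves Horizon's own domains open to direct spoofing in follow-on attacks.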
Lessons Learned¶
What Went Wrong¶
| Gap | Detail | Remediation |
|---|---|---|
| User-level app consent allowed | Any user could grant OAuth permissions to any app without admin review | Change to admin-only consent or implement admin consent workflow |
| No app governance monitoring | No alerting on new consent grants or unusual API access patterns | Deploy CASB with app governance, configure real-time alerts |
| DMARC not enforced | p=none policy allowed spoofed emails without rejection | Enforce p=reject — though this specific attack used a different domain |
| No Graph API monitoring | 47,832 API calls over 8 days went undetected | Monitor service principal sign-ins and API volume anomalies |
| Weekly manual review | Detection relied on analyst manually reviewing weekly reports — 8-day dwell time | Implement automated real-time detection and alerting |
| No verified publisher requirement | Malicious apps with arbitrary names accepted without verification | Require verified publisher status for all third-party apps |
| Insufficient data classification | ITAR and M&A documents stored without DLP labels — could not assess exfiltration impact quickly | Implement Microsoft Information Protection labels and DLP policies |
What Went Right¶
| Control | Impact |
|---|---|
| Entra ID audit logging | Provided complete forensic timeline of consent grants and API access |
| Unified Audit Log retention | MailItemsAccessed and FileDownloaded events available for full attack window |
| Analyst vigilance | Weekly review eventually caught the anomalous app — despite being manual |
| Token revocation effectiveness | Once malicious apps were blocked and tokens revoked, access terminated immediately |
ATT&CK Navigator Mapping¶
| Technique ID | Technique Name | Phase |
|---|---|---|
| T1566.002 | Phishing: Spearphishing Link | Initial Access |
| T1528 | Steal Application Access Token | Credential Access |
| T1550.001 | Use Alternate Authentication Material: Application Access Token | Defense Evasion |
| T1098.003 | Account Manipulation: Additional Cloud Roles | Persistence |
| T1114.002 | Email Collection: Remote Email Collection | Collection |
| T1530 | Data from Cloud Storage Object | Collection |
| T1537 | Transfer Data to Cloud Account | Exfiltration |
Related Chapters¶
- Chapter 33 — Identity & Access Security — OAuth 2.0, Entra ID, conditional access, token management
- Chapter 20 — Cloud Attack & Defense — Cloud identity attacks, M365 security, CASB deployment
- Chapter 44 — Web Application Pentesting — OAuth vulnerabilities, consent phishing, application security
Scenario Debrief
Operation Consent Trap demonstrates the power and subtlety of OAuth-based attacks. Unlike traditional credential theft, OAuth token abuse bypasses MFA entirely, survives password resets, and uses legitimate Microsoft infrastructure for data exfiltration, leaving network-level detection essentially blind. The attack surface is the OAuth consent mechanism itself: a single click on "Accept" grants persistent access to email and files without any malware deployment. Defense requires restricting app consent to administrators, monitoring Entra ID audit logs for consent grants, deploying CASB with app governance, and training users to recognize OAuth consent prompts as a security decision — not just another "Accept" dialog.