
Incident Response Plan

This plan defines procedures for detecting, responding to, and recovering from security incidents affecting Meridian Seven systems, data, and services.

It covers all security incidents, including unauthorized access, data breaches, service outages, malware, denial-of-service attacks, insider threats, supply chain compromises, and vulnerability exploitation.

SEV-1

Confirmed data breach involving customer data, complete production outage, active exploitation of a critical vulnerability against our systems, or ransomware.

Metric         | Target
Response time  | 15 minutes
Triage         | 1 hour
Containment    | 4 hours
Resolution     | 24 hours
Postmortem due | 5 business days

Notify: CTO, CISO, all engineering, legal counsel, affected customers, regulators (if applicable).

SEV-2

Partial production outage, confirmed compromise without known data exfiltration, unauthorized access to internal systems, or a critical vulnerability with active exploitation in the wild.

Metric         | Target
Response time  | 1 hour
Triage         | 2 hours
Containment    | 8 hours
Resolution     | 48 hours
Postmortem due | 5 business days

Notify: CTO, CISO, affected engineering teams, affected customers (if service impact is visible).

SEV-3

Degraded performance, minor security event (failed brute-force attempt, single-account compromise), high-severity vulnerability requiring remediation, or non-critical outage.

Metric         | Target
Response time  | 24 hours
Triage         | 48 hours
Containment    | 5 business days
Resolution     | 10 business days
Postmortem due | Optional (CISO discretion)

Notify: CISO, relevant engineering team.

SEV-4

Suspicious but unconfirmed activity, low-severity vulnerability, or policy violation without security impact.

Metric        | Target
Response time | 72 hours
Triage        | 5 business days
Resolution    | Next sprint or 30 days

Notify: CISO for tracking only.
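The SLA targets in the four tables above can be encoded as a small lookup for incident tooling or reporting. This is an illustrative sketch, not part of the plan: the names are hypothetical, and business-day targets are approximated as calendar days.

```python
from datetime import datetime, timedelta

# SLA targets per severity, taken from the tables above.
# Business-day targets are approximated as calendar days
# (5 business days ~ 7 days, 10 business days ~ 14 days).
SLA_TARGETS = {
    "SEV-1": {"response": timedelta(minutes=15), "triage": timedelta(hours=1),
              "containment": timedelta(hours=4), "resolution": timedelta(hours=24)},
    "SEV-2": {"response": timedelta(hours=1), "triage": timedelta(hours=2),
              "containment": timedelta(hours=8), "resolution": timedelta(hours=48)},
    "SEV-3": {"response": timedelta(hours=24), "triage": timedelta(hours=48),
              "containment": timedelta(days=7), "resolution": timedelta(days=14)},
    "SEV-4": {"response": timedelta(hours=72), "triage": timedelta(days=7),
              "resolution": timedelta(days=30)},
}

def sla_deadline(opened_at: datetime, severity: str, metric: str):
    """Deadline for an SLA metric, or None where no target is defined
    (e.g. SEV-4 has no containment target)."""
    target = SLA_TARGETS[severity].get(metric)
    return opened_at + target if target is not None else None
```

A table like this keeps SLA math out of ad-hoc spreadsheets and makes breach reports reproducible.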

Source               | What It Detects                                                                                    | Alert Channel
CrowdStrike          | Endpoint threats, malware, suspicious processes, lateral movement                                  | CrowdStrike console, weekly review
GCP Cloud Monitoring | Uptime check failures, application errors, anomalous traffic, alert policy violations, SSL issues | Slack notification channel (#security-alerts)
Dependabot           | Vulnerable dependencies                                                                            | GitHub notifications, weekly security review
Google Workspace     | Suspicious sign-in, admin changes, DLP violations                                                  | Google Admin alerts, email
1Password Watchtower | Compromised credentials, weak passwords, expiring certs                                            | 1Password dashboard, weekly review
User Reports         | Phishing, suspicious emails, unusual behavior                                                      | Slack #security-alerts, email to CISO

Role              | Person | Contact
Primary On-Call   | CTO    | GCP Cloud Monitoring alert via Slack notification channel
Escalation Backup | CISO   | Notified via Slack if no ack in 15 minutes

Escalation sequence:

  1. 0 min: GCP Cloud Monitoring alert policy fires, Slack notification channel posts to #incidents, CTO notified
  2. 15 min (no ack): CISO notified via Slack
  3. 30 min (no ack): Both CTO and CISO notified simultaneously
  4. Upon ack: incident commander updates the #incidents thread with acknowledgment status
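The escalation sequence above can be expressed as a timing table, useful for drills or for checking who should already have been paged at a given point. A sketch only; names are illustrative, and the real paging is wired in GCP Cloud Monitoring notification channels, not in application code.

```python
from datetime import timedelta

# Escalation steps from the sequence above: at each unacked threshold,
# the listed people are notified (the 30-minute step pages both).
ESCALATION_STEPS = [
    (timedelta(minutes=0), ["CTO"]),
    (timedelta(minutes=15), ["CISO"]),
    (timedelta(minutes=30), ["CTO", "CISO"]),
]

def recipients_due(elapsed: timedelta, acknowledged: bool) -> list:
    """Who should currently be paged, `elapsed` after the alert fired."""
    if acknowledged:
        return []  # escalation stops on acknowledgment
    due = []
    for threshold, people in ESCALATION_STEPS:
        if elapsed >= threshold:
            due = people  # the latest reached step supersedes earlier ones
    return due
```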

SEV-1/2: all stakeholders notified within 24 hours. SEV-1 with customer data access: initiate breach notification per Section 8.

Anyone who detects or suspects an incident must report it immediately to the CISO via Slack #security-alerts or email. Monitoring-detected incidents surface as GCP Cloud Monitoring alert policies posting to Slack; user-reported incidents are logged as GitHub Issues using the Incident Response template as a fallback.

  1. CISO (or designated incident commander) confirms severity (GCP Cloud Monitoring alert policies include severity labels; override if warranted)
  2. Incident commander assembles response team:
    • SEV-1/2: CISO, CTO, relevant engineering leads, legal (if data breach)
    • SEV-3: CISO, relevant engineer(s)
    • SEV-4: CISO tracks only
  3. Incident Report created from template
  4. Dedicated Slack channel created for SEV-1/2: #incident-YYYY-MM-DD-brief
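For step 4, the channel name can be derived mechanically from the incident date and a short summary. A sketch assuming Slack's lowercase, no-punctuation channel-name rules; the function name is hypothetical.

```python
from datetime import date
import re

def incident_channel_name(incident_date: date, brief: str) -> str:
    """Build the dedicated SEV-1/2 channel name: #incident-YYYY-MM-DD-brief.

    `brief` is a short free-text summary; it is slugified so the result
    satisfies Slack's channel-name constraints (lowercase, hyphens only).
    """
    slug = re.sub(r"[^a-z0-9]+", "-", brief.lower()).strip("-")
    return f"#incident-{incident_date.isoformat()}-{slug}"
```

Deriving the name rather than typing it by hand keeps channel names greppable and sortable by date.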

Immediate (stop the bleeding):

  • Revoke compromised credentials; rotate secrets via Doppler
  • Isolate affected systems (disable network access, suspend services)
  • Block malicious IPs/domains via Cloudflare WAF
  • Suspend compromised user accounts
  • Enable enhanced logging on affected systems

Short-term (stabilize):

  • Deploy temporary fixes or patches
  • Implement additional monitoring on the attack vector
  • Redirect traffic away from affected systems if needed
  • Preserve forensic evidence before making changes (snapshots, log exports)

Investigation:

  1. Collect and preserve:
    • Application and infrastructure logs (GCP Cloud Logging — Cloud Run, Cloudflare)
    • Endpoint telemetry (CrowdStrike)
    • Access logs (Google Workspace admin audit, GitHub audit)
    • Database audit logs (Supabase)
    • Network and WAF logs (Cloudflare)
  2. Establish timeline of events
  3. Identify attack vector and scope of compromise
  4. Determine what data (if any) was accessed or exfiltrated
  5. Identify root cause

Eradication and recovery:

  1. Remove root cause (patch vulnerability, remove malware, close access vector)
  2. Rotate all potentially compromised credentials
  3. Restore from known-good backups if necessary
  4. Verify system integrity before returning to service
  5. Phased recovery with enhanced monitoring
  6. Confirm normal operations and close the incident

Audience    | SEV-1             | SEV-2             | SEV-3         | SEV-4
CTO         | Immediate (phone) | Immediate (Slack) | Daily summary | Weekly summary
Engineering | Immediate (Slack) | Within 1 hour     | As needed     | N/A
All Staff   | Within 4 hours    | Within 24 hours   | N/A           | N/A
Legal       | Immediate (phone) | Within 4 hours    | N/A           | N/A

Audience           | When                                | Method                 | Responsible
Affected Customers | Within 72 hours of confirmed breach | Email, status page     | CTO + Legal
Regulators         | Per regulation (GDPR: 72 hours)     | Formal written notice  | Legal + CISO
Law Enforcement    | If criminal activity suspected      | Formal report          | Legal + CISO
General Public     | Only if required by regulation      | Status page, blog post | CTO

Communication rules:

  • Only CTO or CISO may authorize external communications about security incidents
  • All external communications reviewed by legal counsel before release
  • Do not speculate about scope or cause in early communications
  • Update GCP Cloud Monitoring status dashboard for any customer-visible impact
  • Maintain internal communication log in the Incident Report

Postmortems:

  • Mandatory for all SEV-1 and SEV-2 incidents; due within 5 business days of resolution
  • Optional for SEV-3; not required for SEV-4

Postmortem must include:

  • Incident timeline (detection through resolution)
  • Root cause analysis (5 Whys or equivalent)
  • Impact assessment (users, data, services, duration)
  • What went well; what could be improved
  • Preventive actions with owners and due dates
  • Follow-up items tracked in GitHub issues; SEV-1/2 postmortems authored via Slack modal, stored as GitHub issue comments, and pulled by nightly evidence automation to evidence/logs/

Postmortems are blameless — focus on systems and processes, not individuals. Reviewed by CTO, CISO, and incident team.

Customers notified within 72 hours of a confirmed breach affecting their data. Notification must cover: what happened, what data was affected, what Meridian Seven is doing, what the customer should do, and a contact for questions.

  • GDPR: supervisory authority within 72 hours; affected individuals without undue delay
  • State breach notification laws: per applicable state requirements
  • Contractual obligations: per customer agreements and DPAs
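The 72-hour clocks above (customer notice and the GDPR supervisory-authority notice) both run from the time the breach is confirmed. A minimal sketch of the deadline arithmetic; the names are illustrative:

```python
from datetime import datetime, timedelta

# Both the customer-notification and GDPR windows are 72 hours from
# breach confirmation. "Without undue delay" for affected individuals
# is not a fixed clock and is deliberately not modeled here.
NOTIFICATION_WINDOW = timedelta(hours=72)

def notification_deadline(confirmed_at: datetime) -> datetime:
    """Latest time the 72-hour notifications may be sent."""
    return confirmed_at + NOTIFICATION_WINDOW
```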

Training and maintenance:

  • All engineering staff must be familiar with this plan
  • Tabletop exercises conducted annually
  • Plan reviewed and updated annually or after any SEV-1/2 incident

Meridian Seven — Confidential