CCPA Cybersecurity Audit: The Infrastructure Checklist Your Engineering Team Actually Needs
California's new cybersecurity audit requirement takes effect January 1, 2026, with phased certification deadlines starting April 2028. Every law firm in the country has published a client alert. Not one of them tells you what to actually deploy. This is the engineering checklist.
The timeline your engineering team needs to know
The CCPA cybersecurity audit isn't a single deadline. It's a phased rollout based on company revenue, and the CPPA designed it that way so enforcement has teeth from day one.
- April 2028 — Businesses with $100M+ annual revenue must complete their first certified cybersecurity audit and submit it to the CPPA.
- April 2029 — Businesses with $50M-$100M annual revenue.
- April 2030 — All other businesses that meet CCPA thresholds (process 100K+ consumers' data, or derive 50%+ revenue from selling/sharing personal information).
April 2028 sounds far away. It isn't. The audit requires a certified third-party assessment of your security controls. Auditors need evidence. Evidence requires controls that have been running long enough to produce logs, access reviews, and configuration history. If you start deploying controls six months before your deadline, you'll have six months of evidence. Auditors want twelve.
That means the real deadline for $100M+ companies is roughly Q1 2027 — when controls need to be live and generating evidence.
What the CCPA cybersecurity audit actually covers
The CPPA's regulatory text reads like legal prose. Here's what it translates to in infrastructure terms. The audit evaluates whether your organization has implemented "reasonable security procedures and practices" across these domains:
1. Authentication controls
The audit checks whether you enforce strong authentication across all systems that access personal information. This isn't "do you have a login page" — it's whether MFA is enforced, passwords meet complexity requirements, sessions expire, and failed attempts trigger lockouts.
- MFA enforcement — required on all accounts with access to PI, including service accounts where feasible. SMS-only MFA is a finding.
- Password policy — minimum 12 characters, complexity requirements, no reuse of last 12 passwords, forced rotation every 90 days for privileged accounts.
- Session management — idle timeout (15 minutes for privileged sessions, 30 for standard), absolute timeout after 8 hours, no persistent sessions for admin access.
- Account lockout — lockout after 5 failed attempts, minimum 15-minute lockout duration, alert on repeated lockout events.
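Taken together, the lockout bullets reduce to a small piece of state: a failure counter and a lock expiry per account. Here's a minimal sketch in Python using the 5-attempt / 15-minute thresholds above — the class and its interface are illustrative, not tied to any particular identity provider:

```python
import time
from collections import defaultdict

MAX_ATTEMPTS = 5
LOCKOUT_SECONDS = 15 * 60

class LockoutTracker:
    def __init__(self, now=time.time):
        self._now = now                      # injectable clock, useful for testing
        self._failures = defaultdict(list)   # username -> failure timestamps
        self._locked_until = {}              # username -> unlock time

    def record_failure(self, username):
        """Record a failed attempt; returns True when this attempt triggers a lockout."""
        now = self._now()
        self._failures[username].append(now)
        if len(self._failures[username]) >= MAX_ATTEMPTS:
            self._locked_until[username] = now + LOCKOUT_SECONDS
            self._failures[username].clear()
            return True  # caller should emit an alert on lockout events
        return False

    def is_locked(self, username):
        until = self._locked_until.get(username)
        return until is not None and self._now() < until

    def record_success(self, username):
        # a successful login resets the failure counter
        self._failures[username].clear()
```

In production this state lives in your IdP or a shared store, not process memory — the point is that every threshold in the bullet list maps to a concrete, testable behavior.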
2. Encryption requirements
Every data store and every data path that touches personal information needs encryption. The audit checks both at-rest and in-transit encryption, plus key management practices.
- At rest — AES-256 for databases, object storage, backups, and any local storage. Not just "the disk is encrypted" — field-level or column-level encryption for sensitive PI categories (SSN, financial, health data).
- In transit — TLS 1.2+ on all external connections. TLS 1.3 preferred. No self-signed certs in production. Internal service-to-service communication encrypted (mTLS or service mesh).
- Key management — keys stored in a dedicated KMS (not in application code, not in environment variables). Key rotation on a defined schedule. Separation of duties between key administrators and key users.
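On the client side, the in-transit requirement mostly means refusing to negotiate anything weaker than TLS 1.2 and never skipping certificate verification. A sketch using Python's standard `ssl` module:

```python
import ssl

def make_pi_client_context() -> ssl.SSLContext:
    """TLS context for connections that carry personal information:
    TLS 1.2 minimum (1.3 negotiated when available), hostname checking
    and certificate verification always on -- no self-signed certs."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.check_hostname = True
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx
```

Centralizing this in one factory function means an auditor (or a unit test) can verify the floor in one place instead of hunting through every HTTP client configuration.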
3. Access controls
The audit evaluates whether access to personal information follows least-privilege principles and whether you can demonstrate who accessed what, when.
- RBAC/ABAC — role-based or attribute-based access control with documented role definitions. No shared accounts. No standing admin access to production databases.
- Least privilege — users get the minimum permissions required for their role. Quarterly access reviews with documented approvals and removals.
- Privileged access management — just-in-time access for production systems. Break-glass procedures documented and audited. All privileged sessions recorded.
- Audit logging — every access to PI-containing systems logged with user identity, timestamp, action, and data accessed. Logs immutable and retained for 12+ months.
4. Network security
- Segmentation — PI-processing systems in dedicated network segments. No flat network where a compromised web server can query the PII database directly.
- Firewall rules — default-deny ingress and egress. Documented justification for every allow rule. Regular rule review (quarterly minimum).
- Remote access — VPN or zero-trust network access for all remote connections to PI systems. No direct SSH/RDP exposure to the internet.
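A quick way to check the default-deny posture is to scan security group rules for anything open to the internet. Here's a sketch that walks data shaped like the EC2 `describe_security_groups` response — the input shape is an assumption; adapt it to your provider:

```python
def flag_open_ingress(security_groups):
    """Flag ingress rules that violate default-deny: any rule open to
    0.0.0.0/0, which also catches direct SSH/RDP (22/3389) exposure.
    A review aid for the quarterly rule review, not an enforcement tool."""
    findings = []
    for sg in security_groups:
        for rule in sg.get("IpPermissions", []):
            for ip_range in rule.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    findings.append({
                        "group": sg["GroupId"],
                        "port": rule.get("FromPort"),
                        "issue": "ingress open to the internet",
                    })
    return findings
```

Run it on a schedule and the empty-findings output itself becomes quarterly rule-review evidence.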
5. Incident response
- Detection — SIEM or equivalent with alerting on anomalous access patterns, privilege escalation, bulk data access, and exfiltration indicators.
- Logging — centralized log aggregation with tamper-proof storage. Logs from all PI-touching systems forwarded and correlated.
- Response plan — documented incident response plan that specifically addresses PI breaches, including California's breach notification obligations: notice to affected residents in the most expedient time possible and without unreasonable delay, plus notice to the Attorney General when a breach affects more than 500 California residents.
- Testing — tabletop exercises or red team exercises at least annually. Results documented with remediation tracking.
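The "bulk data access" indicator is worth making concrete: alert when one principal reads an unusual volume of PI records inside a short window. A rolling-window sketch — the threshold and window are placeholders to tune against your own baseline:

```python
from collections import defaultdict

def bulk_access_alerts(events, threshold=1000, window_seconds=300):
    """events: (user, epoch_seconds, rows_read) tuples in any order.
    Returns the set of users whose rows read within any rolling window
    exceed the threshold -- a crude exfiltration indicator for SIEM alerting."""
    by_user = defaultdict(list)
    for user, ts, rows in events:
        by_user[user].append((ts, rows))
    alerts = set()
    for user, accesses in by_user.items():
        accesses.sort()
        total, start = 0, 0
        for ts, rows in accesses:
            total += rows
            # slide the window start forward past expired events
            while accesses[start][0] < ts - window_seconds:
                total -= accesses[start][1]
                start += 1
            if total > threshold:
                alerts.add(user)
    return alerts
```

A real deployment expresses this as a SIEM correlation rule rather than a batch script, but the logic the auditor wants to see documented is exactly this.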
6. Data inventory
- What PI you hold — categorized inventory of personal information types (identifiers, financial, biometric, geolocation, browsing history, etc.).
- Where it's stored — every database, cache, log, backup, and third-party system that holds PI, mapped and documented.
- How it flows — data flow diagrams showing PI movement from collection through processing, storage, sharing, and deletion.
- Retention schedules — defined retention periods per data category with automated deletion when retention expires.
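Automated deletion starts with a machine-readable retention schedule. A minimal sketch — the categories and periods here are illustrative; set yours per your legal requirements:

```python
from datetime import date, timedelta

# Illustrative schedule: retention period in days per PI category
RETENTION_DAYS = {
    "identifiers": 365 * 3,
    "financial": 365 * 7,
    "browsing_history": 180,
}

def deletion_due(category, collected_on, today=None):
    """True when a record has exceeded its retention period and should
    be picked up by the automated deletion job."""
    today = today or date.today()
    return today >= collected_on + timedelta(days=RETENTION_DAYS[category])
```

The deletion job that consumes this check, plus its execution logs, is what turns a retention policy document into audit evidence.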
The SOC 2 overlap: what carries over and what doesn't
If your company already holds a SOC 2 Type II report, you're not starting from zero. Roughly 70% of the CCPA cybersecurity audit requirements map directly to SOC 2 Trust Services Criteria. But the remaining 30% is where companies get caught.
What carries over from SOC 2
- Access controls (CC6.1-CC6.8) — RBAC, least privilege, access provisioning/deprovisioning, and access reviews map cleanly to CCPA requirements.
- Logical and physical access (CC6.1) — authentication controls, MFA, password policies are typically well-documented in SOC 2.
- System operations (CC7.1-CC7.5) — monitoring, incident detection, and incident response procedures carry over directly.
- Change management (CC8.1) — infrastructure change controls, code review, and deployment procedures are already evidenced.
- Encryption controls (CC6.1, CC6.7) — at-rest and in-transit encryption, key management practices.
- Logging and monitoring (CC7.2) — centralized logging, alerting, and log retention.
What's net-new for CCPA
This is where SOC 2 stops and the CCPA cybersecurity audit begins. These controls are privacy-specific, and most SOC 2 programs don't cover them:
- Deletion verification — SOC 2 covers data retention broadly, but CCPA requires you to prove that when a consumer requests deletion, the data is actually deleted from every system — primary databases, caches, backups, logs, third-party integrations. You need automated verification with audit trails showing deletion completion across all data stores.
- Consent enforcement evidence — your SOC 2 doesn't cover whether opt-out signals are honored in real time. CCPA requires evidence that consent preferences (including GPC signals) are enforced at the point of data processing, not just recorded. This means logging every consent check, every signal received, and every enforcement action.
- Privacy-specific access logging — SOC 2 logs who accessed the system. CCPA wants to know who accessed which consumer's personal information and why. This is a materially different level of granularity — per-consumer, per-record access logging with purpose documentation.
- Data inventory and flow mapping — SOC 2 has system descriptions. CCPA requires a granular data inventory: every category of PI, every storage location, every third party that receives it, every data flow, and every retention schedule. This is a living document, not a point-in-time description.
- Third-party risk assessment for PI processors — SOC 2 covers vendor management generally. CCPA requires specific assessment of every vendor that processes personal information, including contractual requirements, technical controls verification, and ongoing monitoring.
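Of these five, deletion verification is the most mechanical, so here's what it looks like as code: query every store that can hold a consumer's data and emit an audit record of the result. The store-lookup interface is an assumption — wire it to your actual databases, caches, and backup catalogs:

```python
from datetime import datetime, timezone

def verify_deletion(consumer_id, stores):
    """stores maps store name -> callable returning True if the consumer's
    data is still present there. Produces the per-store audit trail the
    deletion-verification requirement asks for."""
    results = {name: not present(consumer_id) for name, present in stores.items()}
    return {
        "consumer_id": consumer_id,
        "verified_at": datetime.now(timezone.utc).isoformat(),
        "stores": results,  # True = confirmed deleted in that store
        "complete": all(results.values()),
    }
```

Any `complete: false` record is both a remediation ticket and evidence that your verification actually checks every store, which is exactly what the auditor will probe.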
The engineering checklist
Here's what "compliant" looks like in infrastructure terms. Each item includes what to deploy, not what to write in a policy document.
MFA enforcement
Don't rely on users enabling MFA voluntarily. Enforce it at the identity provider level. Here's an AWS IAM policy that denies all actions unless MFA is present:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllWithoutMFA",
      "Effect": "Deny",
      "NotAction": [
        "iam:CreateVirtualMFADevice",
        "iam:EnableMFADevice",
        "iam:ListMFADevices",
        "iam:ResyncMFADevice",
        "sts:GetSessionToken"
      ],
      "Resource": "*",
      "Condition": {
        "BoolIfExists": {
          "aws:MultiFactorAuthPresent": "false"
        }
      }
    }
  ]
}
```
Attach this to every IAM group. Users can still set up MFA, but they can't do anything else until they do.
Encryption at rest with KMS
Every data store that holds PI needs encryption with a customer-managed key. Here's the IaC for an AWS KMS key with automatic rotation and a restrictive key policy:
```hcl
data "aws_caller_identity" "current" {}

resource "aws_kms_key" "pi_encryption" {
  description             = "Encryption key for personal information data stores"
  deletion_window_in_days = 30
  enable_key_rotation     = true
  rotation_period_in_days = 365

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid    = "KeyAdminAccess"
        Effect = "Allow"
        Principal = {
          AWS = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/KeyAdministrator"
        }
        Action = [
          "kms:Create*", "kms:Describe*", "kms:Enable*", "kms:List*",
          "kms:Put*", "kms:Update*", "kms:Revoke*", "kms:Disable*",
          "kms:Get*", "kms:Delete*", "kms:ScheduleKeyDeletion",
          "kms:CancelKeyDeletion"
        ]
        Resource = "*"
      },
      {
        Sid    = "KeyUsageAccess"
        Effect = "Allow"
        Principal = {
          AWS = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/ApplicationService"
        }
        Action = [
          "kms:Decrypt", "kms:DescribeKey", "kms:Encrypt",
          "kms:GenerateDataKey*"
        ]
        Resource = "*"
      }
    ]
  })
}

resource "aws_kms_alias" "pi_encryption" {
  name          = "alias/pi-data-encryption"
  target_key_id = aws_kms_key.pi_encryption.key_id
}
```
```typescript
import { Duration, RemovalPolicy } from "aws-cdk-lib";
import * as kms from "aws-cdk-lib/aws-kms";
import * as iam from "aws-cdk-lib/aws-iam";

const piEncryptionKey = new kms.Key(this, "PiEncryptionKey", {
  description: "Encryption key for personal information data stores",
  enableKeyRotation: true,
  rotationPeriod: Duration.days(365),
  pendingWindow: Duration.days(30),
  removalPolicy: RemovalPolicy.RETAIN,
});

// Key administrators — manage but cannot use the key
const keyAdminRole = iam.Role.fromRoleName(this, "KeyAdmin", "KeyAdministrator");
piEncryptionKey.grantAdmin(keyAdminRole);

// Application services — encrypt and decrypt only
const appServiceRole = iam.Role.fromRoleName(this, "AppService", "ApplicationService");
piEncryptionKey.grantEncryptDecrypt(appServiceRole);

piEncryptionKey.addAlias("alias/pi-data-encryption");
```
```typescript
import * as aws from "@pulumi/aws";

// Resolve the account ID so the role ARNs in the key policy are valid
const caller = aws.getCallerIdentity({});

const piEncryptionKey = new aws.kms.Key("piEncryption", {
  description: "Encryption key for personal information data stores",
  deletionWindowInDays: 30,
  enableKeyRotation: true,
  rotationPeriodInDays: 365,
  policy: caller.then((c) =>
    JSON.stringify({
      Version: "2012-10-17",
      Statement: [
        {
          Sid: "KeyAdminAccess",
          Effect: "Allow",
          Principal: { AWS: `arn:aws:iam::${c.accountId}:role/KeyAdministrator` },
          Action: [
            "kms:Create*", "kms:Describe*", "kms:Enable*", "kms:List*",
            "kms:Put*", "kms:Update*", "kms:Revoke*", "kms:Disable*",
            "kms:Get*", "kms:Delete*", "kms:ScheduleKeyDeletion",
            "kms:CancelKeyDeletion",
          ],
          Resource: "*",
        },
        {
          Sid: "KeyUsageAccess",
          Effect: "Allow",
          Principal: { AWS: `arn:aws:iam::${c.accountId}:role/ApplicationService` },
          Action: ["kms:Decrypt", "kms:DescribeKey", "kms:Encrypt", "kms:GenerateDataKey*"],
          Resource: "*",
        },
      ],
    })
  ),
});

const piEncryptionAlias = new aws.kms.Alias("piEncryptionAlias", {
  name: "alias/pi-data-encryption",
  targetKeyId: piEncryptionKey.keyId,
});
```
Key administrators can manage the key. Application services can only encrypt and decrypt. Nobody has both permissions. This separation of duties is specifically what auditors look for.
IAM least-privilege configuration
Standing admin access to production databases is an automatic finding. Deploy just-in-time access with automatic expiration:
```hcl
resource "aws_iam_policy" "pi_database_read" {
  name        = "pi-database-read-only"
  description = "Read-only access to PI databases — JIT only"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "rds:DescribeDBInstances",
          "rds-db:connect"
        ]
        Resource = "arn:aws:rds-db:*:*:dbuser:*/readonly_user"
        Condition = {
          # Expiry is stamped at apply time — create this policy per
          # access grant so each grant carries its own 4-hour window.
          DateLessThan = {
            "aws:CurrentTime" = timeadd(timestamp(), "4h")
          }
          Bool = {
            "aws:MultiFactorAuthPresent" = "true"
          }
        }
      }
    ]
  })
}
```
```typescript
import * as iam from "aws-cdk-lib/aws-iam";

const piDatabaseReadPolicy = new iam.ManagedPolicy(this, "PiDatabaseRead", {
  managedPolicyName: "pi-database-read-only",
  description: "Read-only access to PI databases — JIT only",
  statements: [
    new iam.PolicyStatement({
      effect: iam.Effect.ALLOW,
      actions: ["rds:DescribeDBInstances", "rds-db:connect"],
      resources: ["arn:aws:rds-db:*:*:dbuser:*/readonly_user"],
      conditions: {
        // Expiry is stamped at synth time — deploy this policy per
        // access grant so each grant carries its own 4-hour window.
        DateLessThan: {
          "aws:CurrentTime": new Date(Date.now() + 4 * 60 * 60 * 1000).toISOString(),
        },
        Bool: {
          "aws:MultiFactorAuthPresent": "true",
        },
      },
    }),
  ],
});
```
```typescript
import * as aws from "@pulumi/aws";

const piDatabaseReadPolicy = new aws.iam.Policy("piDatabaseRead", {
  name: "pi-database-read-only",
  description: "Read-only access to PI databases — JIT only",
  policy: JSON.stringify({
    Version: "2012-10-17",
    Statement: [
      {
        Effect: "Allow",
        Action: ["rds:DescribeDBInstances", "rds-db:connect"],
        Resource: "arn:aws:rds-db:*:*:dbuser:*/readonly_user",
        Condition: {
          // Expiry is stamped at deploy time — create this policy per
          // access grant so each grant carries its own 4-hour window.
          DateLessThan: {
            "aws:CurrentTime": new Date(Date.now() + 4 * 60 * 60 * 1000).toISOString(),
          },
          Bool: {
            "aws:MultiFactorAuthPresent": "true",
          },
        },
      },
    ],
  }),
});
```
Access is read-only, time-boxed to 4 hours, and requires MFA. Every grant creates an audit trail entry. When the session expires, access is gone — no cleanup required.
CloudTrail audit logging
The audit requires comprehensive logging that can't be tampered with. Here's the IaC for a CloudTrail configuration that logs all management and data events for PI-touching resources, with integrity validation:
```hcl
resource "aws_cloudtrail" "ccpa_audit_trail" {
  name                          = "ccpa-compliance-trail"
  s3_bucket_name                = aws_s3_bucket.audit_logs.id
  include_global_service_events = true
  is_multi_region_trail         = true
  enable_log_file_validation    = true // tamper-evident digest files

  event_selector {
    read_write_type           = "All"
    include_management_events = true

    // Basic event selectors support S3 object-level data events.
    // RDS Data API data events require advanced_event_selector instead.
    data_resource {
      type   = "AWS::S3::Object"
      values = ["arn:aws:s3:::pi-data-bucket/"]
    }
  }

  cloud_watch_logs_group_arn = "${aws_cloudwatch_log_group.audit.arn}:*"
  cloud_watch_logs_role_arn  = aws_iam_role.cloudtrail_cloudwatch.arn
}

// Log retention: 12 months minimum for CCPA audit evidence
resource "aws_cloudwatch_log_group" "audit" {
  name              = "/ccpa/audit-trail"
  retention_in_days = 365
}

// S3 bucket with object lock — logs cannot be deleted or modified
resource "aws_s3_bucket" "audit_logs" {
  bucket              = "company-ccpa-audit-logs"
  object_lock_enabled = true
}
```
```typescript
import { RemovalPolicy } from "aws-cdk-lib";
import * as cloudtrail from "aws-cdk-lib/aws-cloudtrail";
import * as logs from "aws-cdk-lib/aws-logs";
import * as s3 from "aws-cdk-lib/aws-s3";

// S3 bucket with object lock — logs cannot be deleted or modified
const auditBucket = new s3.Bucket(this, "AuditLogs", {
  bucketName: "company-ccpa-audit-logs",
  objectLockEnabled: true,
  removalPolicy: RemovalPolicy.RETAIN,
});

// Log retention: 12 months minimum for CCPA audit evidence
const auditLogGroup = new logs.LogGroup(this, "AuditLogGroup", {
  logGroupName: "/ccpa/audit-trail",
  retention: logs.RetentionDays.ONE_YEAR,
});

const trail = new cloudtrail.Trail(this, "CcpaAuditTrail", {
  trailName: "ccpa-compliance-trail",
  bucket: auditBucket,
  includeGlobalServiceEvents: true,
  isMultiRegionTrail: true,
  enableFileValidation: true, // tamper-evident digest files
  sendToCloudWatchLogs: true,
  cloudWatchLogGroup: auditLogGroup,
  managementEvents: cloudtrail.ReadWriteType.ALL,
});

// Log object-level data events for the PI-holding S3 bucket
trail.addS3EventSelector(
  [{ bucket: s3.Bucket.fromBucketName(this, "PiBucket", "pi-data-bucket") }],
  {
    readWriteType: cloudtrail.ReadWriteType.ALL,
    includeManagementEvents: true,
  }
);
```
```typescript
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

// S3 bucket with object lock — logs cannot be deleted or modified
const auditBucket = new aws.s3.BucketV2("auditLogs", {
  bucket: "company-ccpa-audit-logs",
  objectLockEnabled: true,
});

// Log retention: 12 months minimum for CCPA audit evidence
const auditLogGroup = new aws.cloudwatch.LogGroup("auditLogGroup", {
  name: "/ccpa/audit-trail",
  retentionInDays: 365,
});

const ccpaAuditTrail = new aws.cloudtrail.Trail("ccpaAuditTrail", {
  name: "ccpa-compliance-trail",
  s3BucketName: auditBucket.id,
  includeGlobalServiceEvents: true,
  isMultiRegionTrail: true,
  enableLogFileValidation: true, // tamper-evident digest files
  eventSelectors: [
    {
      readWriteType: "All",
      includeManagementEvents: true,
      // Basic event selectors support S3 object-level data events;
      // RDS Data API data events require advanced event selectors instead.
      dataResources: [
        {
          type: "AWS::S3::Object",
          values: ["arn:aws:s3:::pi-data-bucket/"],
        },
      ],
    },
  ],
  cloudWatchLogsGroupArn: pulumi.interpolate`${auditLogGroup.arn}:*`,
  cloudWatchLogsRoleArn: cloudtrailCloudwatchRole.arn,
});
```
Log file validation means every log file gets a cryptographic digest. If anyone modifies or deletes a log entry, the digest chain breaks. Auditors can verify the integrity of your entire audit trail with a single CLI command.
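The digest-chain idea is easy to demonstrate in miniature: each digest covers the previous one, so modifying any entry breaks every later link. A toy sketch of the chaining property — CloudTrail's real digest files are signed and produced hourly, so treat this as an illustration, not a reimplementation:

```python
import hashlib
import json

def chain_logs(entries):
    """Link log entries into a hash chain: each digest covers the entry
    plus the previous digest."""
    prev = ""
    chained = []
    for entry in entries:
        record = {"entry": entry, "prev": prev}
        prev = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
        chained.append({**record, "digest": prev})
    return chained

def verify_chain(chained):
    """Recompute every digest; any tampered entry or broken link fails."""
    prev = ""
    for record in chained:
        expected = hashlib.sha256(
            json.dumps({"entry": record["entry"], "prev": record["prev"]},
                       sort_keys=True).encode()
        ).hexdigest()
        if record["prev"] != prev or record["digest"] != expected:
            return False
        prev = record["digest"]
    return True
```

This is the property `aws cloudtrail validate-logs` checks for you: tampering anywhere invalidates everything downstream.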
Evidence collection: how IaC generates audit artifacts automatically
The cybersecurity audit doesn't just check whether controls exist. It requires evidence that controls are operating effectively over time. This is where most engineering teams underestimate the effort — controls are the easy part, evidence is the hard part.
If your infrastructure is defined as code, you're already generating most of the evidence you need. You just need to capture and preserve it.
Infrastructure-as-code snapshots
Every infrastructure plan and apply is an auditable record of your infrastructure state. Configure your CI/CD pipeline to archive these automatically:
```yaml
# .github/workflows/infra-audit-evidence.yml
# Assumes Terraform and AWS credentials are configured on the runner.
name: Infrastructure Audit Evidence
on:
  push:
    paths:
      - 'infrastructure/**'
jobs:
  capture-evidence:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Capture terraform plan
        run: |
          terraform plan -out=tfplan
          terraform show -json tfplan > plan-evidence.json
      - name: Capture current state
        run: |
          terraform show -json > state-evidence.json
      - name: Archive to audit bucket
        run: |
          TIMESTAMP=$(date +%Y-%m-%dT%H:%M:%S)
          aws s3 cp plan-evidence.json \
            s3://audit-evidence/infra/${TIMESTAMP}-plan.json
          aws s3 cp state-evidence.json \
            s3://audit-evidence/infra/${TIMESTAMP}-state.json
```
Every infrastructure change is now timestamped, immutable (object lock on the bucket), and queryable. When the auditor asks "show me that encryption was enabled on this database since deployment," you point them at the state snapshots.
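Because those snapshots are plain `terraform show -json` output, answering the auditor's question is a JSON query. A sketch that flags RDS instances without storage encryption in a snapshot — key paths follow Terraform's documented state JSON format, and this walks the root module only (recurse into `child_modules` for full coverage):

```python
import json

def unencrypted_rds(state_json):
    """Return addresses of aws_db_instance resources in a
    `terraform show -json` snapshot that lack storage encryption.
    Root module only -- recurse into child_modules for full coverage."""
    state = json.loads(state_json)
    resources = state.get("values", {}).get("root_module", {}).get("resources", [])
    return [
        res.get("address")
        for res in resources
        if res.get("type") == "aws_db_instance"
        and not res.get("values", {}).get("storage_encrypted")
    ]
```

Run the same query across the full history of archived snapshots and you have continuous evidence, not a point-in-time screenshot.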
Deployment logs as change evidence
Your CI/CD pipeline already knows who deployed what, when, and whether it passed review. Preserve this metadata:
- Git commit history — who authored the change, who approved the PR, when it was merged. This is your change authorization evidence.
- Pipeline execution logs — which tests ran, which security scans passed, what was deployed. This is your change validation evidence.
- Deployment manifests — the exact container images, configurations, and secrets references used in each deployment. This is your configuration baseline evidence.
Automated access reviews
Quarterly access reviews are a requirement. Don't do them manually. Script the extraction and comparison:
```python
# Quarterly access review — automated evidence generation
import json
from datetime import datetime

import boto3


def generate_access_review():
    iam = boto3.client('iam')
    review = {"review_date": datetime.now().isoformat(), "users": []}

    # Paginate: list_users returns at most 100 users per call
    for page in iam.get_paginator('list_users').paginate():
        for user in page['Users']:
            policies = iam.list_attached_user_policies(UserName=user['UserName'])
            groups = iam.list_groups_for_user(UserName=user['UserName'])
            mfa = iam.list_mfa_devices(UserName=user['UserName'])
            # ListUsers already includes PasswordLastUsed — no per-user GetUser call needed
            last_used = user.get('PasswordLastUsed')

            review["users"].append({
                "username": user['UserName'],
                "mfa_enabled": len(mfa['MFADevices']) > 0,
                "policies": [p['PolicyName'] for p in policies['AttachedPolicies']],
                "groups": [g['GroupName'] for g in groups['Groups']],
                "last_active": str(last_used),
                "review_status": "pending",
            })
    return review


if __name__ == "__main__":
    print(json.dumps(generate_access_review(), indent=2))
```
Run this quarterly. The output is a structured JSON document that shows every user, their permissions, MFA status, and last activity. Flag inactive accounts (no login in 90+ days) for deactivation. Flag over-privileged accounts for remediation. The review itself becomes auditable evidence.
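The flagging step can be scripted too. A sketch that post-processes the review JSON from the script above, applying the no-MFA and 90-day inactivity rules — field names match the `generate_access_review()` output:

```python
from datetime import datetime, timedelta

def flag_review(review, inactive_days=90, now=None):
    """Flag accounts with no MFA, accounts that never logged in, and
    accounts inactive for `inactive_days`+ days."""
    now = now or datetime.now()
    flags = []
    for user in review["users"]:
        if not user["mfa_enabled"]:
            flags.append((user["username"], "mfa_missing"))
        last = user.get("last_active")
        if last in (None, "None"):
            flags.append((user["username"], "never_logged_in"))
            continue
        # str(datetime) output parses with fromisoformat; compare naive for simplicity
        last_dt = datetime.fromisoformat(last).replace(tzinfo=None)
        if last_dt < now - timedelta(days=inactive_days):
            flags.append((user["username"], "inactive"))
    return flags
```

Feed the flags into your ticketing system and the remediation trail writes itself alongside the review.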
The gap most teams miss
You can have every control on this checklist deployed and still fail the audit. The reason: the CCPA cybersecurity audit isn't just about security controls. It's about security controls as they apply to personal information specifically.
That means you need to know exactly which systems hold PI, which data flows involve PI, and which access patterns touch PI. Without a current, accurate data inventory, your security controls are generic — and the auditor has no way to verify they're actually protecting what the CCPA requires them to protect.
The data inventory is the foundation. Start there. Everything else — encryption, access controls, logging, evidence collection — depends on knowing what you're protecting and where it lives.
// Free CCPA gap assessment — we'll map your current security controls against the CCPA cybersecurity audit requirements and identify exactly what's missing. 60 minutes, 48-hour gap report.