Blog / CCPA Audit

The 18 CCPA Cybersecurity Audit Components Explained in Plain English

Starting in 2026, certain California businesses must conduct annual cybersecurity audits under new CCPA regulations issued by the California Privacy Protection Agency (CPPA). These audits aren't optional, and they aren't vague -- the regulations specify 18 distinct components your security program must address.

If you've read the actual regulatory text, you know it's dense. This guide translates all 18 CCPA cybersecurity audit components into plain language. For each one, we'll explain what the regulation is really asking for and what it looks like in practice.

Quick Background: Who Needs a CCPA Cybersecurity Audit?

Not every business in California needs one. The audit requirement applies to businesses that meet specific thresholds:

  • Revenue from data: Businesses that derive 50% or more of their annual revenue from selling or sharing personal information
  • Large processors: Companies with $26.6M+ in annual gross revenue that also process the personal information of 250,000 or more consumers, households, or devices
  • Sensitive data handlers: Organizations that process the sensitive personal information of 50,000 or more consumers

If any of those apply to you, a cybersecurity audit is mandatory. The deadlines are staggered by revenue: April 1, 2028 for businesses over $100M, April 1, 2029 for $50M-$100M, and April 1, 2030 for those under $50M.

Even if you're not technically required yet, these 18 components represent what California considers "reasonable security" -- and they're worth knowing about regardless.

The CCPA cybersecurity audit covers 18 specific components your security program must address

The 18 Audit Components

1. Multi-Factor Authentication (MFA)

What the regulation says: Your business must implement phishing-resistant multi-factor authentication for all employees, contractors, and service providers who access systems containing personal information.

What this means practically: A password alone isn't enough. Every person who touches your systems needs a second verification step -- an authenticator app, hardware security key, or biometric. The "phishing-resistant" part is important: SMS codes don't qualify because they can be intercepted, and even app-based one-time codes (Microsoft Authenticator, Google Authenticator) can be relayed through a convincing fake login page. Hardware security keys (like YubiKey) and passkeys built on FIDO2/WebAuthn are what "phishing-resistant" actually means; app-based authenticators are a meaningful step up from SMS but fall short of that bar.

If you're running a business where people log in with just a username and password, this is one of the first things to fix.

2. Password Management

What the regulation says: All accounts must use strong, unique passwords or passphrases, and your organization must have a formal password policy.

What this means practically: You need a written policy that sets minimum password length (NIST currently recommends at least 15 characters), prohibits password reuse across systems, and requires unique passwords for every account. The simplest way to enforce this is to deploy an enterprise password manager (like 1Password Business or Bitwarden) across your organization and require its use. You should also be checking passwords against known breach databases.
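A sketch of what enforcing that policy looks like in code -- here with a tiny hypothetical in-memory breach list standing in for a real corpus such as the Have I Been Pwned range API:

```python
import hashlib

# Hypothetical sample of SHA-1 hashes of known-breached passwords.
# In practice you would query a real breach corpus (e.g. the HIBP range API).
BREACHED_SHA1 = {
    hashlib.sha1(pw.encode()).hexdigest().upper()
    for pw in ("password123", "letmein", "qwerty12345")
}

def check_password(candidate: str, in_use: set[str], min_length: int = 15) -> list[str]:
    """Return a list of policy violations for a proposed password."""
    problems = []
    if len(candidate) < min_length:
        problems.append(f"shorter than {min_length} characters")
    if candidate in in_use:
        problems.append("reused across accounts")
    digest = hashlib.sha1(candidate.encode()).hexdigest().upper()
    if digest in BREACHED_SHA1:
        problems.append("appears in a known breach corpus")
    return problems

print(check_password("password123", in_use=set()))
# flags both the length violation and the breach-corpus hit
```

An enterprise password manager does all of this for you; the point of the sketch is that each policy clause maps to a concrete, testable check.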

3. Data Encryption at Rest

What the regulation says: Personal information stored on your systems must be encrypted.

What this means practically: Any database, file server, laptop, or backup that contains personal data needs encryption enabled. For most businesses, this means turning on full-disk encryption (BitLocker for Windows, FileVault for Mac), encrypting database columns that hold personal information, and ensuring your cloud storage uses encryption at rest (most major providers like AWS and Azure do this by default, but you need to verify and document it). The key detail auditors will look for is key management -- who controls the encryption keys, and are they stored separately from the data?

4. Data Encryption in Transit

What the regulation says: Personal information must be encrypted whenever it's being transmitted.

What this means practically: Every time personal data moves -- between your servers, to a user's browser, to a third-party API, in an email attachment -- it needs to be encrypted. At a minimum, your website and all internal applications should use HTTPS (TLS 1.2 or higher). Internal network traffic between servers that handles personal data should also be encrypted. VPNs or encrypted tunnels for remote access. Email encryption for sensitive data. File transfers via SFTP, not FTP. If your employees are emailing spreadsheets full of customer data as unencrypted attachments, that's a finding.
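Enforcing the TLS 1.2 floor on outbound connections is a two-line change with Python's standard-library ssl module, for example. This is a client-side sketch; server frameworks expose an equivalent setting:

```python
import ssl

def strict_client_context() -> ssl.SSLContext:
    """TLS client context that refuses anything below TLS 1.2."""
    ctx = ssl.create_default_context()              # cert verification + hostname checks on
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2    # reject SSLv3 / TLS 1.0 / TLS 1.1
    return ctx

# Usage: pass context=strict_client_context() to http.client,
# urllib.request.urlopen, or socket-wrapping code.
```

The same idea applies everywhere data moves: set the minimum protocol version explicitly and verify certificates, rather than trusting library defaults that may vary by version.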

5. Access Controls

What the regulation says: Access to personal information must be restricted on a need-to-know basis.

What this means practically: Not everyone in your company should be able to see all customer data. You need role-based access controls (RBAC) that limit who can view, modify, or export personal information based on their job function. A marketing coordinator doesn't need access to raw customer databases. An IT admin maintaining servers doesn't need access to HR records. You'll need to document these access levels, review them regularly (at least quarterly), and promptly revoke access when someone changes roles or leaves the company.
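At its core, RBAC is a mapping from roles to permitted actions that gets checked on every request, with everything unlisted denied by default. A minimal sketch -- the role and permission names are invented for illustration; real systems load this from an identity provider or policy store:

```python
# Hypothetical role-to-permission mapping.
ROLE_PERMISSIONS: dict[str, set[str]] = {
    "marketing": {"view_campaign_metrics"},
    "support":   {"view_customer_record"},
    "dba":       {"view_customer_record", "modify_customer_record",
                  "export_customer_data"},
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default: unknown roles and unlisted actions are refused."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("marketing", "export_customer_data"))  # → False
print(is_allowed("dba", "export_customer_data"))        # → True
```

Keeping the mapping in one place is also what makes the quarterly access reviews practical: the reviewer reads one table instead of auditing scattered per-system grants.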

6. Privileged Account Management

What the regulation says: Privileged accounts (admin-level access) must be limited and monitored.

What this means practically: Admin accounts are the keys to the kingdom, and auditors will scrutinize how you manage them. You should have as few privileged accounts as possible. Admin credentials should be different from everyday login credentials. Privileged access should be logged and monitored. Consider implementing a privileged access management (PAM) solution for larger organizations, or at minimum, maintain a documented list of who has admin access to what, and review it quarterly. Shared admin accounts (like a single "admin@company.com" that three people use) are a red flag.
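Even the "documented list, reviewed quarterly" baseline can be automated. A sketch that flags exactly the two red flags above -- shared accounts and overdue reviews (the account records and field names are hypothetical):

```python
from datetime import date

def review_findings(accounts: list[dict], today: date,
                    max_age_days: int = 90) -> list[str]:
    """Flag shared admin accounts and accounts whose access review is overdue."""
    findings = []
    for acct in accounts:
        if len(acct["holders"]) > 1:
            findings.append(f"{acct['name']}: shared by {len(acct['holders'])} people")
        if (today - acct["last_reviewed"]).days > max_age_days:
            findings.append(f"{acct['name']}: review overdue")
    return findings

admins = [
    {"name": "admin@company.com", "holders": ["ana", "ben", "cy"],
     "last_reviewed": date(2025, 1, 10)},
    {"name": "root-db",           "holders": ["ana"],
     "last_reviewed": date(2025, 9, 1)},
]
print(review_findings(admins, today=date(2025, 10, 1)))
# the shared admin@company.com account is flagged twice; root-db is clean
```

Run something like this on a schedule and archive the output, and you have both the control and the audit evidence.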

Not Sure Where You Stand on These Requirements?

Take our free 5-minute assessment to see which of the 18 components you've already addressed and where the gaps are.

Take the Free Assessment

7. Physical Access Controls

What the regulation says: Physical access to systems that store or process personal information must be restricted.

What this means practically: If you have on-premises servers, they should be in a locked room with controlled access (key cards, access logs, or similar). Visitor access to sensitive areas should be logged. Workstations that access personal data should auto-lock after inactivity. If you're fully cloud-based, you still need to think about laptop security, clean desk policies, and physical security of any devices that can access your systems. For businesses using co-location facilities or data centers, you'll need documentation of the facility's physical security controls.

8. Data Inventory and Classification

What the regulation says: You must maintain an inventory of personal data flows and classify your data based on sensitivity.

What this means practically: You need to know exactly what personal data you collect, where it lives, how it flows through your systems, and who has access to it. This means creating a data map that documents every system, database, and third-party service that touches personal information. You also need a classification scheme -- not all personal data is equal. A name and email address is different from a Social Security number or biometric data. Your security controls should be proportional to the sensitivity of the data.

This component is foundational. You can't protect data you don't know you have.
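A data map doesn't require special tooling to start: a structured record per system already gives you auditable answers. A sketch with hypothetical systems and a three-tier classification scheme:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataAsset:
    system: str
    fields: tuple[str, ...]
    classification: str            # "public" | "internal" | "sensitive"
    shared_with: tuple[str, ...]   # downstream processors with access

# Hypothetical inventory entries for illustration.
INVENTORY = [
    DataAsset("crm_db",       ("name", "email"),       "internal",  ("mail_vendor",)),
    DataAsset("payroll_saas", ("ssn", "bank_account"), "sensitive", ()),
]

def systems_at(level: str) -> list[str]:
    """Answer the auditor's question: which systems hold data at this tier?"""
    return [a.system for a in INVENTORY if a.classification == level]

print(systems_at("sensitive"))  # → ['payroll_saas']
```

The structure matters more than the tool: every later component (encryption scope, access reviews, vendor oversight, retention) keys off this inventory.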

9. Secure System Configuration

What the regulation says: Systems and infrastructure must be securely configured following established benchmarks.

What this means practically: Every server, workstation, network device, and cloud service should be configured according to security best practices -- not left at factory defaults. This typically means following CIS Benchmarks or similar hardening guides. Default passwords changed. Unnecessary services disabled. Administrative interfaces not exposed to the internet. Cloud resources configured with least-privilege permissions. You should have documented configuration standards and a process for verifying that new systems are deployed according to those standards.
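Verifying that new systems match the documented standard is easy to automate: express the baseline as expected settings and diff each system against it. A sketch with made-up setting names, loosely in the spirit of a CIS Benchmark check:

```python
# Hypothetical hardening baseline.
BASELINE = {
    "default_password_changed": True,
    "telnet_enabled":           False,
    "mgmt_iface_public":        False,
    "auto_lock_minutes":        15,
}

def config_drift(actual: dict) -> dict:
    """Return the settings where a system deviates from the baseline."""
    return {k: actual.get(k) for k, want in BASELINE.items() if actual.get(k) != want}

new_server = {"default_password_changed": True, "telnet_enabled": True,
              "mgmt_iface_public": False, "auto_lock_minutes": 15}
print(config_drift(new_server))  # → {'telnet_enabled': True}
```

Production tooling (CIS-CAT, cloud config scanners, Ansible/Chef compliance modes) does exactly this at scale; the drift report is the audit artifact.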

10. Patch Management

What the regulation says: Systems and software must be regularly patched and updated.

What this means practically: You need a documented process for identifying, testing, and deploying security patches across your environment. Critical and high-severity vulnerabilities should be patched within a defined timeframe -- typically 14 days for critical and 30 days for high. This applies to operating systems, applications, firmware, and third-party libraries. You should be tracking patch status across your environment and maintaining records that prove patches were applied on time. Automatic updates are fine for many systems, but you need to verify they're actually working.
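Those 14-day and 30-day windows are straightforward to track once each finding records its severity and publication date. A sketch (the CVE IDs and dates are invented):

```python
from datetime import date

SLA_DAYS = {"critical": 14, "high": 30}   # remediation windows from the policy above

def overdue(findings: list[dict], today: date) -> list[str]:
    """Return IDs of unpatched findings past their severity's SLA window."""
    out = []
    for f in findings:
        window = SLA_DAYS.get(f["severity"])
        if window and not f["patched"] and (today - f["published"]).days > window:
            out.append(f["id"])
    return out

findings = [
    {"id": "CVE-2025-0001", "severity": "critical",
     "published": date(2025, 9, 1),  "patched": False},
    {"id": "CVE-2025-0002", "severity": "high",
     "published": date(2025, 9, 20), "patched": False},
]
print(overdue(findings, today=date(2025, 10, 1)))  # → ['CVE-2025-0001']
```

The history of these reports over time is precisely the "records that prove patches were applied on time" an auditor will ask for.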

11. Vulnerability Management

What the regulation says: You must conduct regular vulnerability scanning and penetration testing.

What this means practically: Vulnerability scanning means running automated tools (like Nessus, Qualys, or similar) against your systems on a regular schedule -- monthly for internal scans, at minimum quarterly for external-facing systems. Penetration testing goes further: hiring qualified security professionals to attempt to break into your systems the way a real attacker would. Most businesses should conduct penetration tests annually at minimum. The results of both activities need to be documented, and identified vulnerabilities need to be tracked through remediation.

12. Security Monitoring and Logging

What the regulation says: You must implement centralized logging, monitoring, and intrusion detection.

What this means practically: You need to be collecting security logs from your critical systems -- firewalls, servers, applications, authentication systems -- and sending them to a centralized location where they're monitored for suspicious activity. This could be a SIEM (Security Information and Event Management) system like Splunk, Microsoft Sentinel, or an outsourced SOC (Security Operations Center). The key requirements: logs must be retained for a defined period (typically 12 months minimum), they must be protected from tampering, and someone must actually be reviewing them. Collecting logs that nobody looks at won't pass an audit.
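The tamper-protection requirement is usually met with append-only or write-once storage, but a hash chain illustrates the idea: each entry commits to everything before it, so any alteration breaks verification. A standard-library sketch:

```python
import hashlib
import json

def append_entry(chain: list[dict], event: dict) -> None:
    """Append an event whose hash covers the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    chain.append({"event": event,
                  "hash": hashlib.sha256((prev + payload).encode()).hexdigest()})

def verify(chain: list[dict]) -> bool:
    """Recompute every link; any edited or deleted entry breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["event"], sort_keys=True)
        if hashlib.sha256((prev + payload).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, {"ts": "2026-01-05T09:14:00Z", "event": "auth_failure", "user": "ana"})
append_entry(log, {"ts": "2026-01-05T09:14:30Z", "event": "auth_success", "user": "ana"})
print(verify(log))  # → True
```

Commercial SIEMs implement stronger versions of this internally; what you must be able to show is that the mechanism exists and that someone reviews what it protects.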

Getting Overwhelmed? You Don't Have to Build This From Scratch.

The CCPA Audit Readiness Kit includes policy templates, gap analysis tools, and implementation guides for all 18 components.

Get the Complete Kit - $497
Security awareness training and incident response tabletops are key audit requirements

13. Malware Protection

What the regulation says: Anti-malware solutions must be deployed across all systems.

What this means practically: Every endpoint (laptop, desktop, server) that processes or can access personal information needs endpoint protection software -- what used to be called "antivirus" but now encompasses broader endpoint detection and response (EDR). Solutions like CrowdStrike, SentinelOne, Microsoft Defender for Endpoint, or Carbon Black are standard choices. The software needs to be centrally managed, kept up to date, and configured to alert on detections. You also need a process for investigating and responding to alerts, not just ignoring them.

14. Network Segmentation

What the regulation says: Networks must be segmented, and unnecessary ports and protocols must be restricted.

What this means practically: Your network shouldn't be flat -- meaning a compromised workstation in accounting shouldn't have direct access to your customer database servers. Use VLANs, firewalls, or software-defined networking to separate different zones: guest Wi-Fi from corporate, corporate from production, production from database tiers. Implement firewall rules that only allow necessary traffic between segments. Document your network architecture, including which ports and protocols are allowed between zones and why. For cloud environments, this translates to VPC segmentation, security groups, and network access control lists.
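"Only allow necessary traffic between segments" boils down to an explicit allow-list that everything else is denied against. A sketch with hypothetical zone names and ports:

```python
# (source zone, destination zone, TCP port) tuples that are explicitly permitted;
# any flow not listed here is denied by default.
ALLOWED_FLOWS = {
    ("web_tier", "app_tier", 8443),
    ("app_tier", "db_tier",  5432),
    ("corp",     "web_tier", 443),
}

def flow_permitted(src: str, dst: str, port: int) -> bool:
    return (src, dst, port) in ALLOWED_FLOWS

print(flow_permitted("app_tier",  "db_tier", 5432))   # → True
print(flow_permitted("guest_wifi", "db_tier", 5432))  # → False
```

Firewall rulebases, cloud security groups, and network ACLs are all variations on this table; documenting it in one place is what lets you answer "why is this port open?" during the audit.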

15. Security Training and Awareness

What the regulation says: All personnel must receive ongoing cybersecurity education.

What this means practically: You need a formal security awareness training program, not a one-time onboarding slide deck. Training should be conducted at least annually and cover topics relevant to your business: phishing recognition, data handling procedures, incident reporting, password hygiene, social engineering, and physical security. You should also run simulated phishing campaigns to test whether the training is working. All training must be tracked -- who completed it, when, and what was covered. New hires should receive training within their first 30 days. Employees in high-risk roles (IT, finance, HR) should receive additional specialized training.
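Tracking "who completed it, when" plus the 30-day new-hire window is a simple date computation once training records are structured. A sketch (the names, dates, and record fields are invented):

```python
from datetime import date

def training_overdue(staff: list[dict], today: date,
                     new_hire_days: int = 30, refresh_days: int = 365) -> list[str]:
    """Flag new hires past their 30-day window and everyone past annual refresh."""
    late = []
    for p in staff:
        completed = p.get("last_training")
        if completed is None:
            if (today - p["hired"]).days > new_hire_days:
                late.append(p["name"])
        elif (today - completed).days > refresh_days:
            late.append(p["name"])
    return late

staff = [
    {"name": "ana", "hired": date(2025, 8, 1),  "last_training": None},
    {"name": "ben", "hired": date(2023, 2, 1),  "last_training": date(2024, 6, 1)},
    {"name": "cy",  "hired": date(2025, 9, 25), "last_training": None},
]
print(training_overdue(staff, today=date(2025, 10, 1)))  # → ['ana', 'ben']
```

Most training platforms produce this report natively; the point is that completion tracking must be systematic, not a spreadsheet someone remembers to update.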

16. Third-Party Risk Management

What the regulation says: You must oversee the security practices of your vendors, contractors, and service providers.

What this means practically: If you share personal data with any third party -- a SaaS provider, a payment processor, a marketing platform, a managed IT service -- you're responsible for ensuring they have adequate security controls. This means maintaining an inventory of all third parties with access to personal data, conducting security assessments before onboarding new vendors, requiring contractual security obligations, and reviewing vendor security posture periodically (at least annually for critical vendors). You should have a standardized vendor security questionnaire and a process for evaluating responses. SOC 2 reports and ISO 27001 certifications from vendors are helpful but not sufficient on their own -- you need to verify they cover the services you're actually using.

17. Data Retention and Disposal

What the regulation says: You must have defined retention schedules and securely dispose of personal information when it's no longer needed.

What this means practically: You can't keep personal data forever "just in case." You need a documented retention policy that specifies how long each category of personal data is kept and why. When data reaches the end of its retention period, it must be securely destroyed -- not just deleted, but overwritten or cryptographically erased for digital data, and shredded for physical records. This applies to backups too, which is where many businesses trip up. If your backup tapes contain personal data from five years ago that should have been deleted, that's a compliance issue. You should also have procedures for responding to consumer deletion requests under the CCPA, and those procedures should be documented and tested.
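Retention schedules become enforceable once each record carries a category and a collection date. A sketch with hypothetical categories, which deliberately fails loudly on uncategorized data rather than silently keeping it forever:

```python
from datetime import date

# Hypothetical retention schedule, in days, per data category.
RETENTION_DAYS = {"support_tickets": 730, "marketing_leads": 365}

def due_for_disposal(records: list[dict], today: date) -> list[str]:
    """Return IDs of records past retention; unknown categories raise."""
    out = []
    for r in records:
        if r["category"] not in RETENTION_DAYS:
            raise ValueError(f"no retention rule for category {r['category']!r}")
        if (today - r["collected"]).days > RETENTION_DAYS[r["category"]]:
            out.append(r["id"])
    return out

records = [
    {"id": "t-100", "category": "support_tickets", "collected": date(2022, 1, 15)},
    {"id": "l-200", "category": "marketing_leads", "collected": date(2025, 6, 1)},
]
print(due_for_disposal(records, today=date(2025, 10, 1)))  # → ['t-100']
```

The disposal step itself still has to be secure deletion (cryptographic erasure or overwriting, including backups); this sketch only covers identifying what is due.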

18. Incident Response Planning

What the regulation says: You must maintain documented incident response and business continuity plans.

What this means practically: You need a written incident response plan that covers how your organization will detect, contain, investigate, and recover from a security incident. The plan should define roles and responsibilities, escalation procedures, communication templates (internal and external, including notification to the CPPA and affected consumers), evidence preservation procedures, and post-incident review processes. Just having a plan on paper isn't enough -- you need to test it. Tabletop exercises at least annually, where your team walks through a realistic breach scenario, are the standard expectation. The plan should also include business continuity provisions: how you'll maintain operations if critical systems are compromised, and how you'll restore from backups.

What This Means for Your Business

If you're looking at this list and feeling behind, you're not alone. Most mid-size businesses have some of these components partially in place but lack the formal documentation, policies, and processes that an auditor will want to see.

The gap between "we sort of do this" and "we can demonstrate this to an auditor" is where most of the work lives. An auditor won't accept "our IT guy handles that" -- they'll want to see written policies, evidence of implementation, training records, access review logs, and incident response test results.

The good news: these 18 components aren't exotic. They align closely with established frameworks like NIST CSF and CIS Controls. If you've done any security work in the past few years, you've likely touched many of these areas already. The task now is formalizing, documenting, and filling in the gaps before your audit deadline arrives.

Where to Start

If you're approaching this from scratch, here's a practical sequence:

  1. Data inventory first. You can't protect what you don't know you have. Map your personal data flows before anything else.
  2. Gap assessment. Evaluate your current state against each of the 18 components. Be honest about what's missing.
  3. Prioritize by risk. Focus on the components that protect your most sensitive data and address your biggest vulnerabilities first.
  4. Document everything. Policies, procedures, configurations, training records. If it isn't written down, it doesn't exist in an audit.
  5. Test your controls. Vulnerability scans, penetration tests, phishing simulations, incident response tabletops. Auditors want evidence that your controls actually work.

Not Sure Where You Stand?

Start with our free 5-minute CCPA readiness assessment. You'll get an instant score across the 18 audit components and a clear picture of where to focus first.

Start the Free Assessment