This is Part 1 of the 2026 Corporate Governance Series.
Artificial intelligence deployment and cybersecurity oversight are critical board-level responsibilities in 2026, regardless of company size or public status. Directors who cannot demonstrate thoughtful oversight of AI adoption and cyber risk increasingly face not only legal exposure but also valuation discounts, financing friction, and lost deals as investors, lenders, and strategic partners scrutinize governance practices. This article examines the governance frameworks that boards should implement to navigate these interconnected challenges effectively.
AI Governance: From Technology Decision to Fiduciary Duty
According to recent industry research, 66% of directors now use AI tools for board work, with 50% utilizing AI for meeting preparation. However, only 22% of organizations have implemented formal AI usage policies, creating significant governance gaps that expose companies to operational, legal, and reputational risks.
For public companies, AI deployment ranks as the second highest organizational priority and top area for capital investment in 2026. Shareholders increasingly expect boards to articulate clear AI strategies that balance innovation with risk management.
For private companies, the pressure comes from investors – present and future. Private equity and venture capital investors demand AI adoption roadmaps that demonstrate competitive positioning and operational efficiency gains. High-net-worth individuals (HNWIs) investing in portfolio companies increasingly condition investment terms on evidence of technological sophistication, including AI governance frameworks. Family offices conducting due diligence now routinely assess whether target companies have established AI oversight protocols.
Increasingly, sophisticated buyers and underwriters treat AI and cybersecurity governance as gating items in M&A and capital‑markets transactions, elevating these topics from “IT issues” to core fiduciary and deal‑readiness concerns.
Global Compliance Complexity
The global regulatory landscape for AI is fragmented, with requirements that differ by region and, within the U.S., from state to state.
In the European Union, the AI Act is entering its enforcement phase, with compliance obligations taking effect and penalties of up to €35 million or 7% of worldwide annual turnover, whichever is higher, for the most serious violations.[1]
In the U.S., while there remains no comprehensive federal framework,[2] the White House released a National Policy Framework for Artificial Intelligence[3] in March 2026, recommending a unified federal approach. Still, companies must navigate a patchwork of state laws and regulations.[4]
In the Asia-Pacific region, approaches vary from stringent oversight in the PRC to principles-based frameworks in Singapore and Australia.
In other emerging markets, such as Brazil and India, AI regulation is being layered into data protection laws.[5]
In the Middle East, AI deployment is anchored in data protection laws and seen as a strategic priority.[6]
And in Africa, the African Union adopted a Continental Artificial Intelligence Strategy in 2024 to guide member nations in their approaches to AI, which vary significantly country to country.[7]
Companies with multi-jurisdictional operations face the challenge of navigating inconsistent requirements for AI governance, risk assessment, transparency, and accountability. Boards must ensure compliance architectures can adapt to this regulatory patchwork while maintaining operational coherence.
Recent litigation and enforcement trends underscore that these are not theoretical risks. In Delaware, stockholder derivative complaints now routinely assert Caremark oversight claims against directors following major cybersecurity incidents; courts may consider cybersecurity a “mission critical” board responsibility.[8] In federal courts across the country, securities class actions are testing “AI-washing” theories, alleging that companies and their directors misled investors by overstating the role and performance of AI in their businesses. Regulators are also active: the SEC continues to treat cybersecurity and AI-related disclosures as core investor-protection issues,[9] even as the current administration signals a more selective, harm-focused enforcement approach than its predecessor.[10] And state attorneys general, including in California and Texas, are pursuing consumer-protection investigations into both deficient data security and AI-driven privacy and bias issues.[11]
Board-Level AI Oversight Framework
Effective AI governance requires boards to address three critical dimensions:
1. Strategic Oversight
Evaluate AI adoption against company strategy and competitive positioning
Assess capital allocation for AI implementation and infrastructure
Review AI’s impact on business model evolution and revenue generation
Monitor competitive intelligence regarding AI deployment in the industry
Ensure the development of reliable, verifiable data collection, management, and governance models to support AI infrastructure
Build assessment of extraterritorial reach (e.g., EU operations or customers that may pull non-EU entities into scope) into product and market entry decisions
2. Risk Management[12]
Implement systematic and regular bias testing for AI systems, particularly those affecting employment, credit, pricing, or customer access decisions
Establish access-control, privacy, and security protocols for AI training data and outputs
Assess third-party AI vendor exposure, including concentration risk and vendor stability
Develop incident response protocols for AI system failures or unintended consequences
3. Ethical and Legal Compliance
Adopt written AI usage policies covering acceptable use, prohibited applications, and human oversight requirements
Establish clear accountability structures, including designation of responsible executives and board oversight responsibilities
Implement transparency mechanisms for stakeholders affected by AI-driven decisions
Monitor evolving legal standards regarding AI liability, intellectual property, and regulatory compliance
Questions Every Board Should Ask
Does our company have a written AI policy governing employee and organizational use?
Who on the management team is accountable for AI governance, and which board members or committee provides oversight?
Are we conducting bias and discrimination testing for AI systems that affect stakeholder decisions?
What is our process for evaluating third-party AI vendors, and do we understand our liability exposure?
Have we audited our internal systems for AI functionality and applied our company policies to them? Do we repeat that audit annually?
How do we ensure human judgment remains central to critical decisions involving AI outputs?
Are we monitoring regulatory developments across all jurisdictions where we operate or have customers?
Special Considerations for Private Companies
Private company boards face unique AI governance challenges:
Investor Expectations: PE and VC investors increasingly include AI governance milestones in investment agreements. Boards should anticipate investor requests for:
Demonstration of competitive AI capabilities relative to industry peers
Clear metrics showing AI-driven efficiency gains or revenue enhancement
Regular reporting on AI adoption progress
Evidence of risk mitigation frameworks
Resource Constraints: Unlike public companies with large, dedicated compliance teams, private companies might be forced to adopt a triage approach to AI governance. Boards should:
Prioritize AI governance in areas with highest risk or strategic value
Adopt scalable AI policies that grow with the company
Identify and leverage third-party expertise through advisory relationships or fractional resources
Consider cost-effective AI governance technology platforms
Exit Planning: For companies pursuing eventual sale or IPO, demonstrating mature AI governance enhances valuation. Sophisticated investors and buyers conduct thorough due diligence on AI practices, data governance, and compliance frameworks. Early investment in AI governance infrastructure reduces transaction friction and supports premium valuations.
For most companies, AI governance and cybersecurity cannot be managed in isolation: the same data sets, vendors, and internal controls underpin both, and board oversight should reflect that interdependence.
Cybersecurity: The Escalating Board-Level Risk
The Threat Landscape in 2026
Cybercrime represents the single largest operational risk for many organizations in 2026. Key statistics underscore the urgency:
Cyber breach costs are projected to rise from $9.22 trillion in 2024 to $13.82 trillion by 2028[13]
Generative AI has increased organizational attack surfaces, and geopolitical risk is reshaping cybersecurity strategy[14]
Directors rank cybersecurity and AI as two of the biggest risks of 2026[15]
The July 2024 CrowdStrike incident, where a faulty software update affected 8.5 million computers globally, illustrated how technology failures cascade across interconnected systems.[16] Insured losses from that incident alone are estimated at between $300 million and $1 billion.[17] The event demonstrates that even companies with sophisticated security postures face existential risk from technology deployment decisions.
Legal and Regulatory Developments
Directors are facing increased potential personal liability exposure related to cybersecurity oversight:
SEC Cybersecurity Rules (effective December 2023 for large accelerated filers): Public companies must disclose material cybersecurity incidents within four business days and provide annual disclosures about cybersecurity risk management, strategy, and governance.[18]
State-Level Regulations: New York’s SHIELD Act, California’s Consumer Privacy Act (CCPA), and similar state laws impose specific security requirements and breach notification obligations.
Shareholder Derivative Litigation: Courts increasingly reject motions to dismiss derivative claims alleging board failure to implement adequate cybersecurity oversight. The Caremark doctrine, requiring boards to implement reasonable information and reporting systems, encompasses cybersecurity.[19]
Private Company Exposure: While private companies may not face SEC disclosure requirements, they remain subject to:
State data breach notification laws
Contractual obligations to customers and vendors regarding data security
Professional liability for negligent security practices
Investor demands for cybersecurity transparency, particularly in investment agreements and side letters
Board Cybersecurity Oversight Best Practices
1. Establish Clear Governance Structure
Designate board committee responsibility for cybersecurity oversight (typically Audit or Risk Committee)
Ensure at least one director possesses cybersecurity expertise or access to expert advisors
Receive regular management reports on cybersecurity posture, incidents, and response activities
Review and approve cybersecurity budgets, ensuring adequate resource allocation
2. Implement Comprehensive Risk Assessment
Conduct regular third-party cybersecurity audits and penetration testing
Maintain current inventory of critical systems, data assets, and access controls
Assess supply chain and vendor cybersecurity risk, including critical software and infrastructure providers
Review cyber insurance coverage adequacy and exclusions
3. Develop and Test Incident Response
Adopt written incident response plan covering detection, containment, notification, and recovery
Conduct tabletop exercises simulating cyber incidents, including board participation
Establish communication protocols for notifying directors, investors, regulators, and affected parties
Pre-identify external resources (forensic investigators, legal counsel, crisis communications) for rapid mobilization
4. Monitor Compliance and Regulatory Obligations
Track evolving cybersecurity regulations across all relevant jurisdictions
Ensure policies address contractual security obligations in customer and vendor agreements
Review disclosure obligations and develop protocols for materiality assessments
Maintain documentation of board oversight activities to demonstrate reasonable oversight in potential litigation
Cybersecurity Questions for the Boardroom
When did we last conduct an independent cybersecurity assessment, and what were the key findings?
What are our three most significant cybersecurity vulnerabilities, and what is management doing to address them?
Have we tested our incident response plan in the last 12 months, and did the exercise include board participation?
What percentage of employees have completed cybersecurity training? How do we measure effectiveness? How often is training mandated?
How quickly can we detect and respond to a breach, and what metrics do we use to measure response capability?
What is our exposure to supply chain cyber risk, particularly from critical vendors?
Does our cyber insurance coverage adequately protect against our most significant risks?
Integration of AI and Cybersecurity Oversight
AI and cybersecurity governance are interconnected:
AI for Cyber Defense: Organizations deploy AI for threat detection, anomaly identification, and automated response, requiring boards to understand AI capabilities and limitations in security contexts
Data Governance Overlap: AI systems require vast data sets, creating data security challenges that boards must address holistically
AI as Attack Vector: Adversaries use AI to enhance phishing campaigns, identify vulnerabilities, and evade detection systems
Third-Party Risk: AI vendors introduce cybersecurity exposure requiring coordinated risk assessment
Boards should ensure these two critical oversight areas are coordinated, whether through joint committee responsibility, integrated reporting, or cross-functional management accountability.
Practical Implementation Roadmap
For boards seeking to strengthen AI and cybersecurity governance, we recommend the following phased approach:
Phase 1: Assessment
Conduct gap analysis comparing current practices against governance best practices
Inventory existing AI systems and cybersecurity controls
Review current policies and employee training regimes, committee charters, and oversight structures
Assess director expertise and identify capability gaps
Phase 2: Framework Development
Adopt or update AI usage policy and governance framework
Develop or enhance cybersecurity oversight protocols
Establish clear management accountability and board reporting structures
Identify necessary external resources (advisors, auditors, technology solutions)
Phase 3: Implementation
Deploy approved policies and governance structures
Initiate regular management reporting to designated board committees
Conduct director education on AI and cybersecurity topics
Implement risk monitoring and compliance tracking systems
Phase 4: Testing and Refinement
Conduct tabletop exercises for AI incident and cyber breach scenarios
Review effectiveness of governance frameworks and adjust as needed
Benchmark practices against industry peers and evolving standards
Document governance activities for legal and investor transparency
Conclusion
AI governance and cybersecurity oversight are no longer simply IT concerns. They are core board responsibilities that require active, informed engagement by every board member. Directors who fail to implement reasonable oversight systems face legal liability, investor pressure, and reputational damage. Conversely, boards that establish robust governance frameworks position their organizations for competitive advantage, operational resilience, and stakeholder confidence.
Accelerated technology adoption, regulatory fragmentation, and heightened threat landscapes demand that boards exercise active and strategic oversight. Whether your company is publicly traded or privately held, backed by institutional investors or family capital, the governance imperatives are clear: establish comprehensive AI and cybersecurity frameworks, ensure board-level expertise and engagement, and implement continuous monitoring and improvement processes.
About This Series
This article is the first in a four-part series examining critical corporate governance trends in 2026. Upcoming articles will address:
Part 2: Board Accountability and Competitive Advantage
Part 3: Regulatory Compliance and Risk Management
Part 4: Culture, Ethics, and Talent: The Human Side of Governance
[1] European Union, Regulation (EU) 2024/1689 (EU AI Act), Official Journal of the European Union (2024).
[2] On December 11, 2025, President Trump issued an Executive Order titled Ensuring a National Policy Framework for Artificial Intelligence. It declares that the U.S. national policy is “minimally burdensome” AI regulation to maintain AI dominance and targets the “patchwork” of state AI laws, directing the U.S. DOJ to challenge “onerous” state laws and the U.S. Department of Commerce to identify state laws that are inconsistent with federal policy.
[3] White House, Legislative Recommendations: National Policy Framework Artificial Intelligence (March 2026).
[4] California has taken the most aggressive approach: privacy-centric, consumer-rights-oriented AI regulation. California Consumer Privacy Act / California Privacy Rights Act (CCPA/CPRA), Cal. Civ. Code §§1798.100–1798.199.100. New York is building sectoral AI rules, especially around employment. E.g., NYC Local Law 144 of 2021 (Automated Employment Decision Tools Law). Texas, on the other hand, is taking a lighter touch and is more enforcement-oriented through consumer protection and deceptive trade practices laws. There are some specific AI-use laws, for example in healthcare, but not an overarching AI law. E.g., Tex. SB 1188, 89th Leg., Reg. Sess. (Tex. 2025) (regulating use of artificial intelligence in healthcare and electronic health records). And in Illinois, there are numerous AI-focused statutes, including biometrics and employment. E.g., Ill. Biometric Information Privacy Act, 740 ILCS 14; Ill. Artificial Intelligence Video Interview Act, 820 ILCS 42.
[5] In Brazil, lawmakers in 2023 approved a bill to create an EU-style, risk-based AI framework that sits on top of the country’s GDPR-style data protection law and establishes a national AI governance system. Projeto de Lei no. 2.338, de 2023 (regulating use of AI, risk categories, and the SIA governance system); Lei no. 13.709/2018, Lei Geral de Proteção de Dados Pessoais (LGPD). In India, AI is governed primarily through the Digital Personal Data Protection Act 2023, supplemented by IndiaAI Mission governance guidelines. See also Digital Personal Data Protection Act Rules (Nov. 13, 2025).
[6] For example, the UAE Charter for the Development and Use of AI and free zone guidance push DPIAs, bias testing, and human-in-the-loop controls for sensitive use cases. In Saudi Arabia, AI is governed through SDAIA’s AI ethics principles and the PDPL-based data regime under the National Strategy for Data & AI, with a comprehensive AI law expected as the next step in the Kingdom’s Vision 2030 program.
[7] Available at https://au.int/en/documents/20240809/continental-artificial-intelligence-strategy.
[8] See, e.g., Jennifer Arlen, Caremark Liability for Materially Misleading Cybersecurity Disclosures: Solar Winds Reconsidered (Mar. 2025), available at https://corpgov.law.harvard.edu/2025/03/18/caremark-liability-for-materially-misleading-cybersecurity-disclosures-solar-winds-reconsidered/.
[9] SEC 2026 Enforcement Priorities, available here.
[10] SEC Litigation Release, SEC Dismisses Civil Enforcement Action Against SolarWinds and Chief Information Security Officer (Nov. 20, 2025), available here.
[11] Georgia Wells & Kim Mackeral, California Attorney General Investigating xAI Over Grok’s Deepfakes, Wall Street Journal (Jan. 15, 2026); Press Release: Attorney General Ken Paxton Announces Investigation into DeepSeek and Notifies the Chinese AI Company of its Violation of Texas State Law (Feb. 14, 2025), available here.
[12] For a deep dive, the U.S. National Institute of Standards and Technology (NIST) created a 48-page Artificial Intelligence Risk Management Framework (AI RMF 1.0), as directed by the National Artificial Intelligence Initiative Act of 2020, as a resource for organizations designing, developing, deploying, or using AI.
[13] Statista, Cybercrime Expected to Skyrocket in Coming Years (Feb. 22, 2024), available here; see also Statista, Cybersecurity - Worldwide statistics, available here.
[14] E.g., PwC global cyber study, Germany (2025) (67% increase in attack surfaces), available here; PwC 2026 Global Digital Trust Insights: C-suite playbook and findings, New world, new rules: Cybersecurity in an era of uncertainty (Oct. 1, 2025) (60% increase in cyber risk investment due to geopolitical volatility), available here.
[15] See NACD, Boards Shift Their Focus to Execution (Dec. 10, 2025), available here.
[16] See BBC, CrowdStrike IT outage affected 8.5 million Windows devices (July 20, 2024) (Microsoft estimate), available here.
[17] Guy Carpenter, A Closer Look: Unveiling the Global Impact of CrowdStrike Event (Aug. 1, 2024) (“the likely insured loss is between $300 million and $1 billion”), available here.
[18] Securities and Exchange Commission, “Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure,” 17 CFR Parts 229, 232, 239, 240, and 249 (2023).
[19] In re Caremark Int’l Inc. Derivative Litig., 698 A.2d 959 (Del. Ch. 1996); Marchand v. Barnhill, 212 A.3d 805 (Del. 2019).

