Here’s What You Really Need to Know
- AI technologies are already being integrated into various aspects of plan management and operations.
- Fiduciaries have an obligation to protect participants’ personal and financial data, which includes adopting and maintaining robust cybersecurity practices.
- As a best practice, fiduciaries should ask prospective service providers whether, and how, they use AI-enabled tools to help participants optimize their investment decisions.
- Fiduciaries must ensure the data underlying AI tools is accurate, current, complete and secure, and they cannot rely blindly on opaque ‘black box’ models.
Let’s Dive In
Observing that AI “holds extraordinary potential for both promise and peril,” the Biden Administration issued an Executive Order (EO) in October 2023. Pursuant to that EO, on October 16, 2024, the Department of Labor (DOL) issued non-binding guidance on AI use in the workplace, consisting of principles and best practices for AI developers and employers to protect workers and ensure responsible use of AI.[i]
While the guidance focused primarily on workplace concerns such as recruitment and employment discrimination, it emphasized transparency in the use of AI and employer responsibility in the use of worker data, and it cautioned employers not to rely blindly on the technology, but to understand and review both the inputs and the methodologies behind AI outputs. Although the Trump Administration later withdrew the guidance for review, its underlying principles remain relevant to prudent oversight.
The Current Landscape
AI technologies are already being integrated into various aspects of plan management and operations. AI-driven platforms can analyze individual participant data to deliver tailored communications that support retirement readiness. Some providers also use AI to streamline manual, repetitive tasks, not only saving time and improving accuracy but also helping ensure compliance with plan and operational requirements. These systems can process loans, hardship withdrawals and domestic relations orders, and in some instances AI is used to facilitate investment-related processes. That said, just as human beings can make mistakes or misread plan provisions, AI is not infallible.
Cybersecurity Concerns
With or without the use of AI, fiduciaries have an obligation to protect participants’ personal and financial data, and that includes adopting and maintaining robust cybersecurity practices.[ii] AI can help in this regard by identifying anomalies in account access and distribution activity and helping shield participants from unauthorized transactions. Indeed, at some point a failure to take advantage of these solutions might even suggest imprudence.
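For readers curious what that kind of anomaly screening looks like under the hood, here is a minimal, purely illustrative sketch. The features, sample values, and the choice of scikit-learn’s IsolationForest are our assumptions for illustration, not a description of any particular provider’s monitoring system.

```python
# Illustrative sketch only: flagging unusual account activity with a
# simple anomaly detector. Features and values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per account event:
# [login hour, days since last login, withdrawal amount, new-device flag]
events = np.array([
    [9,   3,     0.0, 0],
    [10,  7,     0.0, 0],
    [14,  2,   500.0, 0],
    [11,  5,     0.0, 0],
    [3, 400, 45000.0, 1],  # 3 a.m. login after long dormancy, new device, large withdrawal
])

detector = IsolationForest(contamination=0.2, random_state=0).fit(events)
labels = detector.predict(events)  # -1 = anomalous, 1 = normal

for event, label in zip(events, labels):
    if label == -1:
        print("Hold for manual review:", event)
```

A real system would, of course, train on far more history and route flagged events to a human review queue rather than act on them automatically.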
Of course, these days the bigger concern is likely the cybersecurity vulnerabilities that AI might introduce into plan processes, or the risk of exposing participant data. One plan advisor has already been sued for putting participant data into the public domain via ChatGPT.
Investment Management Insights
Prudent plan fiduciaries typically enlist the services of a professional advisor in the review and selection of the plan’s investment options. As AI has already begun finding its way into advisors’ models, fiduciaries should ask prospective advisors whether, and how, they use AI-enabled tools to assist in their review and recommendations. Plan fiduciaries should also know whether, and how, AI tools are being deployed to help participants optimize their investment decisions. It’s worth noting that this obligation is no different from the one already applicable to the review and monitoring of human advice. Ultimately, a licensed advisor remains responsible for investment advice.
Provider Selection and Monitoring
Plan fiduciaries have an obligation to ensure that the services rendered to the plan and its participants, and the fees charged for them, are reasonable, including a responsibility to regularly monitor and review those services. That obligation extends to AI, of course, but AI can also be used to help monitor, review and recommend other services.
Whether monitoring AI services or relying on AI to evaluate other services, plan fiduciaries must understand how the AI models are built, what data they rely on, how their outputs are validated, and whether cybersecurity and privacy controls are adequate.
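By way of illustration, one simple form of output validation a committee’s technical reviewers might ask a vendor to demonstrate is agreement with known-correct answers on a curated test set. The sketch below is hypothetical throughout: the `classify` method and the 95% threshold are assumptions, not any vendor’s actual interface or a regulatory standard.

```python
# Hypothetical sketch: spot-checking an AI tool's outputs against a
# curated set of known-correct answers before relying on it.
def validation_rate(tool, labeled_samples):
    """Return the fraction of known-correct cases the tool gets right."""
    correct = sum(
        1 for item, expected in labeled_samples
        if tool.classify(item) == expected  # `classify` is a stand-in API
    )
    return correct / len(labeled_samples)

# Example acceptance gate (threshold is illustrative only):
# if validation_rate(vendor_tool, curated_test_set) < 0.95:
#     document_the_concern_and_escalate()
```

The point is less the specific metric than the discipline: test before adoption, retest periodically, and document both.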
Take Note!
A growing number of advisors have embraced AI meeting assistants (like Otter.ai or Microsoft Copilot) to speed the production of plan committee meeting notes. Automated notetaking helps ensure that minutes are prepared promptly and in a consistent format across meetings, which can support good governance practices.
However, ERISA fiduciary discussions often involve sensitive plan, participant, and vendor data, and many AI tools rely on cloud processing, which raises data privacy and cybersecurity concerns unless strong contractual and technical safeguards are in place.
Moreover, these tools can mischaracterize discussions, misinterpret financial or legal terminology, omit nuance (such as the context of a side conversation), or incorrectly summarize fiduciary deliberations, which could be problematic if minutes are later reviewed by auditors, regulators, or plaintiffs’ counsel. Committees must retain control and custody of the official minutes; AI-generated drafts should be reviewed, corrected, and formally approved like any other record. If AI meeting tools are used, plan counsel should review vendor terms for data handling.
Remember that AI-generated drafts may assist fiduciaries but do not replace the committee’s duty to maintain accurate, contemporaneous records under ERISA.
Transparency is Essential for Accountability
As noted above, the DOL’s guidance emphasized the importance of transparency and cautioned employers against blind reliance on “black box” outputs. Fiduciaries should understand how AI-based decisions are made; relying on outputs without understanding them could run afoul of ERISA’s prudence standards. Because AI is constantly evolving, fiduciaries should monitor these tools regularly and document that monitoring to support continued compliance.
“Garbage in, garbage out” is a tried-and-true maxim that applies in full force to AI. Fiduciaries should ensure the input data and resources driving AI tools are accurate, current, complete and secure. Bottom line: fiduciaries should be able to explain, at least at a high level, how AI recommendations were generated and why reliance on them was prudent.
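As a purely illustrative example of what “accurate, current, complete” might mean in practice, the sketch below screens a participant record before it is fed to an AI tool. The field names and the 90-day staleness threshold are assumptions made for this example, not a regulatory standard.

```python
from datetime import date, timedelta

# Illustrative only: field names and the 90-day staleness threshold
# are hypothetical choices, not a standard.
REQUIRED_FIELDS = {"participant_id", "balance", "birth_date", "last_updated"}
MAX_STALENESS = timedelta(days=90)

def data_quality_issues(record: dict) -> list:
    """Return reasons this record should not feed an AI tool."""
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:                                 # completeness
        issues.append(f"missing fields: {sorted(missing)}")
    if record.get("balance", 0) < 0:            # accuracy (sanity check)
        issues.append("negative balance")
    last = record.get("last_updated")
    if last and date.today() - last > MAX_STALENESS:  # currency
        issues.append("record not updated in over 90 days")
    return issues
```

However simple, gating inputs this way gives a committee something concrete to point to when asked how it ensured the data feeding its AI tools was fit for use.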
Action Items for Plan Sponsors
While AI presents significant opportunities for efficiency and personalization, it should complement, not replace, traditional methods of plan management and fiduciary judgment. As such, plan fiduciaries should evaluate and monitor AI under the same ERISA fiduciary standards that apply to all plan services and providers.
Recommended action items include:
- Integrate AI risk management protocols into the plan’s overall governance strategy; the questions for consideration can help the committee get started.
- If using AI to assist with committee meetings, consider using it only for draft transcription of meeting notes, not as the final record of meeting minutes. Delete AI drafts (and recordings) afterward and, above all, ensure that all parties present (advisor, plan sponsor, other service providers) agree to these practices.
- Evaluate and document how AI tools impact investment selection, recordkeeping and participant advice.
- Review and revise service provider contracts as needed to include AI-specific clauses addressing use, data elements, and compliance with relevant privacy standards.
- Conduct initial and periodic due diligence on AI vendors (and on service providers that use their tools), involving technical experts as needed.
- Train plan committee members on the benefits, limitations and risks of AI in plan operations.
- Document AI-related discussions in committee minutes (including rationale for adoption or rejection).
[i] U.S. Department of Labor, “Artificial Intelligence and Worker Well-being: Principles for Developers and Employers,” archived August 13, 2024, at https://web.archive.org/web/20240813173652/https:/www.dol.gov/general/ai-principles.
[ii] See U.S. Department of Labor, “Cybersecurity Program Best Practices,” available at https://www.dol.gov/agencies/ebsa/key-topics/retirement-benefits/cybersecurity/best-practices; see also “Tips for Hiring a Service Provider with Strong Cybersecurity Practices,” available via the same DOL cybersecurity resources.

