GSA Releases Draft AI Clause Ahead of Upcoming MAS Refresh
Following the government’s very public breakup with Anthropic, GSA released draft artificial intelligence (“AI”) terms and conditions that it intends to include as part of its upcoming Multiple Award Schedule (“MAS”) refresh, currently planned for late March or April 2026. MAS contractors should take note: GSA’s draft clause, GSAR 552.239-7001, Basic Safeguarding of Artificial Intelligence Systems (Feb 2026) (GSAR Deviation), represents the most comprehensive attempt to date to define contractor obligations when deploying AI capabilities in the performance of a federal government contract.
Background: Federal Procurement of AI in Focus
GSA’s release of the clause comes at a time of heightened public attention to the federal government’s procurement of AI tools. Recently, all federal agencies were ordered to cease use of Anthropic’s technology following the company’s dispute with the Department of Defense over use of its large language model (“LLM”). The dispute reportedly stemmed from the Department’s ultimatum that the company allow the Pentagon to use its AI model for “all lawful purposes” without restriction, while Anthropic requested exceptions that would prohibit certain uses of its system—even where those uses were otherwise lawful. The disagreement culminated in DoD designating Anthropic a “supply chain risk” and directing agencies to discontinue use of the technology within six months.
The episode underscores a growing tension between commercial AI providers—many of which impose use restrictions through commercial terms—and the federal government’s expectation that technologies procured under federal contracts be available for “any lawful government purpose.”
Against this backdrop, the GSAR Deviation imposes a series of obligations for Schedule contractors providing or using AI systems during contract performance.
The Draft Clause
Contractor Responsibility for Service Providers
The draft clause applies to all MAS contractors providing or using AI systems in contract performance—regardless of whether the AI is provided directly to the government, embedded in contractor workflows, operated by subcontractors, or licensed by third-party “Service Providers.”[1] The clause defines a “Service Provider” as “an entity that directly or indirectly provides, operates, or licenses an AI system but is not a party to the contract.”[2] Service Providers “may or may not be subcontractors.”
Further, the clause purports to make the contractor responsible for ensuring Service Providers’ compliance with the clause—even where those Service Providers are not “subcontractors” and where the contractor lacks control over the underlying model or infrastructure.[3]
Order of Precedence: GSA’s Clause Overrides AI Providers’ Standard Terms and Conditions
Critically, the clause includes an order of precedence provision designed to override conflicting commercial terms and conditions—including standard terms and conditions imposed by AI service providers (such as through an end-user license agreement (“EULA”) or “clickwrap” agreement).[4] This is significant. Most AI vendors impose “standard” terms and conditions that, by their very nature, cannot be negotiated. Where the GSA clause conflicts with an AI provider’s standard terms, the GSA clause takes precedence.
The order of precedence provision, combined with the requirement that the contractor be responsible “for the Service Provider’s adherence to” the clause, means that GSA is essentially requiring MAS contractors to take responsibility for ensuring upstream AI providers (such as OpenAI, Google, or Microsoft) comply with the clause—or else cease using the AI systems during contract performance.
Government Ownership: “Government Data” and “Custom Developments”
The government asserts “full ownership” over all “Government Data” and “Custom Developments.”[5]
- Government Data includes “Data Inputs” (content submitted to the AI system by or for the government) as well as “Data Outputs” (content generated by the AI system in performance of the contract).
- “Custom Developments” is defined to include modifications, customizations, configurations, or enhancements to AI systems or associated implementations or workflows and any related work product or deliverables developed for the government under the contract, including any “models as a result of model training or fine-tuning.”
The clause expands on the government’s ownership of all “Custom Developments” by requiring the contractor or Service Provider to:
- Dedicate the Custom Developments to the government’s exclusive use;
- Treat Custom Developments as the government’s confidential information; and
- Not use, reproduce, or derive benefit from Custom Developments without written authorization from the Contracting Officer.[6]
While the contractor or Service Provider retains ownership of the underlying AI system and base models,[7] the government is asserting ownership over any fine‑tuned derivatives, configurations, and workflows built specifically for it under the contract (e.g., RAG indexes, prompt libraries, and integration scripts).[8]
Government License: Use of the AI System for “Any Lawful Government Purpose”
The draft clause includes an expansive license grant reflective of the Anthropic dispute: The contractor must grant the government an “irrevocable, royalty-free, non-exclusive license to use the AI System for the duration of [the] contract for any lawful Government purpose.”[9]
This license includes the right to:
- Operate and access the AI system;
- Input “Data Inputs” and receive “Data Outputs”;
- Allow authorized government personnel and contractors to use the AI system; and
- Integrate the AI system with government systems “as necessary for any lawful Government purpose.”[10]
This alone is broad. The provision regarding Data Inputs and Outputs, however, takes it even further: “The AI System must not refuse to produce data outputs or conduct analyses based on the Contractor’s or Service Provider’s discretionary policies.”[11] Said another way, vendor “safety” policies, content moderation rules, and model guardrails cannot block lawful government tasks. Notably, the clause does not require retraining the model or altering model weights to achieve compliance.
This provision, clearly written in response to the Pentagon’s dispute with Anthropic, aims to give the government full access, operability, and integration rights—and seeks to ensure that vendor-imposed safety or content moderation rules cannot override lawful federal use cases.
Disclose Use of All AI Systems and Use Only “American AI Systems”
The draft clause requires contractors to disclose to the contracting officer no later than 30 days after award all AI systems used in contract performance and whether such systems have been “modified or configured to comply with any non-U.S. federal government or commercial compliance or regulatory framework[.]”[12]
This means that contractors must disclose whether an AI system has been configured to comply with foreign regulatory frameworks, such as the European Union’s Artificial Intelligence Act, or with any U.S. state laws, because such modifications or reconfigurations can affect how the AI system behaves when deployed for U.S. government use.
Consistent with the Office of Management and Budget’s (“OMB”) April 2025 memorandum on Driving Efficient Acquisition of Artificial Intelligence in Government, the draft clause requires the use of only “American AI Systems.”[13] The draft clause prohibits the use of “foreign AI systems” in contract performance, including “any AI components manufactured, developed, or controlled by” non-U.S. entities.[14] That means certain foreign models, such as DeepSeek and Qwen, may be off-limits for federal contractors.
The draft clause does not clarify when an AI system is “developed and produced” in the United States, nor what constitutes “manufacture, development, or control” by non-U.S. entities.
Government Data Handling and Processing
Unless expressly directed otherwise, the draft clause requires the contractor and Service Provider to:
- Implement reasonable safeguards to protect Government Data;
- Implement “eyes off” Data handling procedures that restrict human review of Government Data “except as strictly necessary”;
- Provide tools that enable the Government to maintain detailed records of all processing activities involving Government Data;
- Implement and maintain appropriate technical and organizational measures to ensure that Government Data is logically segregated; and
- Upon contract completion or termination, securely delete all Government Data and any Custom Developments and certify deletion to the contracting officer in writing.[15]
Conclusion
The proposed clause, though still in draft form, marks a significant departure from the government’s standard commercial buying practices and a decisive step towards “government-first” AI terms. Comments on the draft clause are due to GSA by Friday, March 20.
MAS contractors should begin inventorying all AI systems used in contract performance (including embedded tools) and reviewing the applicable terms and conditions. The task is even more complicated for generative AI systems that automatically select among underlying LLM models, without user input, when preparing a response. If MAS contractors cannot ensure that the government owns all content submitted to the AI system by or for the government, as well as all content generated by the AI system in performance of the contract (a model inconsistent with how most generative, open AI systems operate), they may be restricted from using such systems if the proposed clause is adopted. The same is true if they cannot ensure government ownership of all modifications, customizations, configurations, or enhancements to the AI systems used. Given the sweeping ownership requirements and the mandate to use “American AI Systems,” MAS contractors will face steep hurdles to continued use of AI and may be forced to cease using it altogether.
-------
[1] GSAR 552.239-7001(c).
[2] GSAR 552.239-7001(a).
[3] GSAR 552.239-7001(c).
[4] See GSAR 552.239-7001(b).
[5] GSAR 552.239-7001(d)(1)(i).
[6] See GSAR 552.239-7001(d)(5).
[7] GSAR 552.239-7001(d)(1)(iv).
[8] GSAR 552.239-7001(d)(1)(i).
[9] GSAR 552.239-7001(d)(2).
[10] GSAR 552.239-7001(d)(2).
[11] GSAR 552.239-7001(d)(2)(ii).
[12] GSAR 552.239-7001(e)(1).
[13] GSAR 552.239-7001(e)(1) (i.e., AI systems “developed and produced” in the United States).
[14] GSAR 552.239-7001(e)(1).
[15] GSAR 552.239-7001(d)(4).