AI Use in Virginia Businesses: What to Do Even Without a Virginia AI Law

General Information Only. This article is for general informational purposes and does not constitute legal advice. Laws change, may have changed since publication, and individual circumstances vary. Consult a licensed Virginia attorney about your specific situation. Reading this article does not create an attorney-client relationship, nor does merely contacting our office through this website or any other means.


Artificial intelligence tools have become embedded in the operations of businesses of every size. From small retailers in Christiansburg using AI-powered customer service chatbots to technology companies near Virginia Tech in Blacksburg deploying machine learning in their products, the practical adoption of AI has moved far faster than legislative and regulatory frameworks designed to govern it.

As of 2025, Virginia has not enacted a comprehensive AI governance law. But the absence of a specific Virginia AI statute does not mean businesses are operating in a regulatory vacuum. Federal agency guidance, existing consumer protection law, employment discrimination rules, and international frameworks create a web of obligations that apply to many AI use cases right now.

Virginia’s AI Legislative Landscape

The Virginia General Assembly has considered AI-related legislation in recent sessions. Proposals have addressed topics including algorithmic decision-making in consequential contexts such as employment, credit, and housing. However, as of the publication of this article, Virginia has not enacted a comprehensive AI law comparable to the EU AI Act.

Virginia has enacted specific legislation addressing AI in narrower contexts, including requirements around synthetic media and deepfakes in political advertising and in certain privacy-related contexts. Businesses should monitor the General Assembly’s annual sessions, as this landscape is evolving and a more comprehensive Virginia AI framework could emerge in the near term.

The governor’s office has also used executive orders and guidance documents to address state agency use of AI, which may signal the direction of future legislation even if those orders do not directly bind private businesses.

Federal Framework: FTC, EEOC, and Sector-Specific Rules

Several federal agencies have issued guidance or taken enforcement positions that apply to AI use in business contexts.

FTC Guidelines on AI

The Federal Trade Commission has made clear through guidance documents and enforcement actions that its authority under Section 5 of the FTC Act, prohibiting unfair or deceptive acts or practices, extends to AI-related conduct. Specifically, the FTC has identified the following as areas of concern:

  • Making false or unsubstantiated claims about an AI system’s capabilities
  • Using AI to engage in discriminatory pricing or targeting
  • Deploying AI in ways that exploit consumers’ behavioral biases
  • Failing to disclose when consumers are interacting with AI rather than a human

The FTC has also emphasized that companies cannot disclaim responsibility for AI outputs by pointing to the technology’s autonomous nature. If your business deploys an AI system that produces a deceptive output to a consumer, the FTC may hold your business responsible regardless of whether a human reviewed the specific output.

EEOC Guidance on AI in Hiring

The Equal Employment Opportunity Commission (EEOC) has issued guidance addressing the use of AI tools in employment decisions, including automated applicant screening, resume review, interview scheduling, and performance assessment tools.

The EEOC’s guidance applies existing employment discrimination law to AI contexts. If an AI tool used in hiring has a disparate impact on a protected class (for example, a resume-screening algorithm that systematically deprioritizes applicants from certain demographic groups), the employer may face liability under Title VII of the Civil Rights Act, the Americans with Disabilities Act, or the Age Discrimination in Employment Act, even if the employer did not intend to discriminate.

The EEOC has indicated that employers cannot rely on AI vendors to bear responsibility for discriminatory outcomes. Employers that use AI tools in hiring decisions should evaluate those tools for potential disparate impact and document that evaluation.

For businesses in the New River Valley that use applicant tracking systems or AI-powered recruiting tools, this guidance should factor into vendor selection and ongoing vendor review processes.

EU AI Act: Impact on US Businesses

The EU AI Act entered into force in 2024 and is being phased in over several years. Although it is a European regulation, it applies extraterritorially: it reaches providers placing AI systems on the EU market and, in some circumstances, providers and deployers outside the EU whose systems’ output is used within the EU.

Virginia businesses that have European customers, operate subsidiaries in Europe, or deploy AI-powered products accessible to EU residents need to consider the EU AI Act’s requirements. The Act creates a tiered system of obligations based on risk level:

  • Unacceptable risk: Certain AI applications are prohibited outright, including real-time biometric identification in public spaces and social scoring systems.
  • High risk: AI systems in hiring, credit scoring, education, healthcare, and law enforcement are subject to strict requirements including conformity assessments, transparency obligations, and human oversight mechanisms.
  • Limited risk: Chatbots and AI-generated content must disclose that users are interacting with AI.
  • Minimal risk: Most AI applications fall into this category, with no specific obligations under the Act, though general consumer protection and anti-deception rules still apply.

For US companies, the practical implication is that EU AI Act compliance may function as a de facto global standard, particularly for businesses that want to maintain EU market access.

Practical AI Governance Steps

For Virginia businesses that use AI tools without yet having a formal governance framework, the following steps provide a starting point.

Inventory your AI tools. Compile a list of every AI tool your business uses, including productivity tools with embedded AI features, customer-facing AI, AI used in employment decisions, and AI in your vendor stack. Many businesses that undertake this exercise discover they are using more AI than they realized.

Review vendor terms and contracts. Your AI vendors’ terms of service govern how your data is used to train models, what liability the vendor accepts for AI outputs, and what the vendor does if a system produces a harmful result. Some vendors train their models on customer data by default unless customers opt out. Review these terms and negotiate changes where possible.

Document use cases and decision authority. For each AI use case, document what decisions the AI is involved in, what data it uses, how outputs are reviewed, and who has final decision-making authority. This documentation supports both internal accountability and regulatory defense.

Assess bias risk. For AI tools used in decisions that affect people, particularly in hiring, lending, pricing, and service delivery, assess whether the tool’s outputs could have a disparate impact on protected groups. This may require requesting testing data from vendors or engaging a third-party auditor.

Update privacy notices to disclose AI use. If your business uses AI to analyze personal data, this processing should be disclosed in your privacy notice. The Virginia Consumer Data Protection Act (VCDPA) requires that privacy notices describe the purposes for which personal data is processed. AI-powered analysis of customer or employee data is a processing purpose that should be disclosed.

Establish a review process for AI-generated content. If your business uses AI to generate content, including customer communications, marketing materials, or reports, implement a human review step before publication. This reduces the risk of inaccurate or inappropriate outputs reaching customers.

AI Tools and Attorney-Client Privilege

A specific concern that has emerged as businesses use AI tools in legal and compliance contexts is the potential effect on attorney-client privilege and work product protection.

When a business uses AI to analyze legal issues, draft legal documents, or assist in legal research, and that process occurs without attorney direction, the resulting materials may not be protected by privilege. The documents could be discoverable in litigation or regulatory proceedings.

If AI tools are being used in connection with a legal matter, internal investigation, or regulatory compliance review, those tools should be used under the direction of legal counsel to preserve the strongest argument for privilege protection.

Additionally, if you submit confidential business or client information to a public AI model, you should understand that your information may be used to train future versions of the model. For businesses handling sensitive client information, this is a meaningful concern.

Planning for What Comes Next

The Virginia AI legislative landscape will almost certainly look different within the next one to two General Assembly sessions. Businesses that build AI governance practices now, before a comprehensive state law is enacted, will be better positioned to demonstrate compliance quickly when requirements arrive.

A governance framework that includes an AI inventory, documented use cases, vendor assessments, bias risk evaluations, and updated privacy notices represents the kind of documented, thoughtful approach that regulators and courts look for when evaluating whether a business has acted responsibly.

For businesses in Christiansburg, Blacksburg, and the broader New River Valley navigating AI adoption, consulting with a Virginia attorney as part of building that framework can help ensure legal risks are identified early.


This article is general information only and is not legal advice. Do not rely on this article to make decisions about your specific situation. Contact Valley Legal or another licensed Virginia attorney to discuss your case. Attorney advertising.

Valley Legal, PLLC is located at 107 Pepper St SE, Christiansburg, Virginia 24073, and serves clients throughout the New River Valley of Virginia, including Montgomery County, Blacksburg, Radford, Pulaski, and surrounding communities.