A practical guide to generative AI: what it is, how to prompt it, and how to protect your data.
What Is Generative AI?
Machine learning and other types of AI have been around for decades. The recent breakthrough that launched a global conversation is generative AI: models that create new, human-like output rather than just matching predefined patterns.
Pre-generative vs. Generative AI
Pre-generative AI used defined patterns to work with structured data.
Example: "If X, then Y" autocomplete in email, or financial modeling.
Generative AI can create new patterns and human-like output from unstructured input.
Example: "Write a 1000-word essay on the work efficiency benefits of AI."
Generating new data means AI can create digital anything: text, code, images, audio, video, and when paired with 3D printing, physical objects. The creative and professional applications are enormous.
If you've used Google to search, you can use tools like ChatGPT, Claude, or Gemini. Think of them as a genius intern: exceptionally capable, best with precise instructions on a need-to-know basis, and whose work you should always review. An intern doesn't have your context and experience, and might make things up to satisfy you.
A genius intern won't replace you for years, and can greatly reduce your workload in the meantime.
Key Terminology
Prompt
Your instruction to the AI model, such as a question or task in ChatGPT.
Training
How an AI learns. It ingests a large corpus of data, digests it to understand patterns, and later makes its own inferred connections.
Context window
What the AI can "remember" in a single conversation. If you exceed the limit, it loses earlier context.
LLM
Large Language Model. The technical term for the generative AI programs most people interact with.
Model
A specific version of an AI program optimized for speed, reasoning, or certain tasks like coding. Providers frequently release new models.
Deep research
AI services that perform multi-stage research with real-time reasoning updates and cited sources. Often takes several minutes and produces long-form outputs.
Agent
Software programmed to take multiple steps autonomously: it creates its own sub-prompts and often uses other tools or apps to complete tasks.
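The agent loop described above can be sketched abstractly. This is a toy illustration, not any provider's actual API: `call_model` and `run_tool` are hypothetical stand-ins for a model call and a tool integration.

```python
def run_agent(goal, call_model, run_tool, max_steps=5):
    """A toy agent loop: the model decides each next step,
    optionally invoking a tool, until it declares it is done."""
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # The model sees the goal plus everything done so far.
        decision = call_model("\n".join(history))
        if decision["action"] == "finish":
            return decision["answer"]
        # Otherwise, run the requested tool and record the result.
        result = run_tool(decision["action"], decision["input"])
        history.append(f"{decision['action']} -> {result}")
    return "Step limit reached without a final answer."
```

The step limit matters in practice: without one, an agent that never decides it is finished will loop indefinitely.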
Access and Cost
If you're a small or solo outfit, you're not spending five or six figures on bespoke, in-house models. You probably don't need to anyway: without substantial customization and ongoing maintenance (in-house data, custom agents, periodic updates), niche AI services offer little advantage over general-purpose models.
Instead, for roughly $20/month you can subscribe to the most powerful knowledge generators in history. All major LLMs provide robust free tiers with rapidly improving capabilities.
The magic is in your prompting abilities, not in expensive software.
Model specialties shift quickly, and maintaining data privacy with all AI models (especially free tiers) is essential.
How to Prompt
For software and vibe-coding workflows, the same structure below applies; see Varia’s Vibe Coding Guide for builder-focused tips, security pitfalls, and legal considerations.
Invest the most time in your initial prompt to avoid backtracking later. AI's output is probabilistic—you'll get different outputs for the same input, like a restaurant rather than a vending machine. Don't be afraid to ask the same question multiple times or to multiple models, or even ask them to critique each other. You can also include your own writing as context so the output matches your style.
Bonus: XML Tagging
You can use XML-like tags to guide the model and structure the output more effectively:
<role>
You are an expert _______.
</role>
<task>
Create ________.
</task>
<instructions>
1. Read the attached context and this prompt fully, and ask
any clarifying questions before responding.
2. ______
3. Ensure any research you perform is cited.
</instructions>
<context>
Attached is _______.
</context>
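The template above can also be assembled programmatically, which is handy when you reuse the same structure across tasks. The helper below is a hypothetical convenience, not part of any provider's API; the tag names simply mirror the template.

```python
def build_prompt(role, task, instructions, context):
    """Assemble an XML-tagged prompt from its four parts."""
    # Number the instructions automatically, one per line.
    steps = "\n".join(f"{i}. {step}" for i, step in enumerate(instructions, 1))
    return (
        f"<role>\n{role}\n</role>\n"
        f"<task>\n{task}\n</task>\n"
        f"<instructions>\n{steps}\n</instructions>\n"
        f"<context>\n{context}\n</context>"
    )

prompt = build_prompt(
    role="You are an expert contracts attorney.",
    task="Create a one-page summary of the attached agreement.",
    instructions=[
        "Read the attached context and this prompt fully, and ask any clarifying questions before responding.",
        "Flag any unusual indemnification terms.",
        "Ensure any research you perform is cited.",
    ],
    context="Attached is a draft services agreement.",
)
```

The role, task, and context here are made-up placeholders; swap in your own.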
Prompting Examples
Text analysis: summarize, compare, or extract key provisions from documents.
Form generation: draft standard forms with specific parameters.
Negotiations, strategy, and issue spotting: for these tasks, place less emphasis on rigid instructions and more on the role, context, and task. This allows the model to be more creative and genuinely helpful, especially as a mocked counterparty. Overly precise instructions can make it predictable and similar to your own thinking.
Try modeling human conditions
It may feel strange, but providing human context makes outputs more realistic and useful:
- Prior interactions—"You have a reputation for starting negotiations with a minor concession then becoming aggressive."
- Perceived importance—"This client is one of your firm's largest, and you're eager to show a win."
- Mood and circumstances—"You're irritable after a red-eye flight and too much coffee."
- Time pressure—"It's 5:30 PM on a Friday."
Substantive but subjective prompts like these are even more effective with "reasoning" or "thinking" models. Remember: example categories overlap. You can issue-spot areas of legal liability and then ask for a form, or analyze a document and then generate a strategy memo.
Privacy and Confidentiality
Always assume the worst. These are technology companies with unprecedented compute power, and there's a reason they offer free versions: they use your data to improve and train their models.
Even when they have a strong data privacy policy, legislation, regulatory guidance, and judicial orders can force their hand. These models don't just collect and repeat data—they can extrapolate more information and even codify falsehoods. Once models have absorbed and trained on data, assume it's practically impossible to make them "forget" it. This is true even for solely in-house AI models.
Why this matters for law firms
Conflicts-related ethical walls, terminated client relationships, or GDPR-compliance deletion requests are all problems when data is baked into a model. The consequences are not hypothetical.
Prompt content
What you submit: text, files, images, or pasted context containing client data.
Request metadata
How it travels: routing, logs, IP identifiers, and conversation metadata.
Retention & training
What is stored: provider policies, model training, backups, and compelled disclosures.
The solution is straightforward: ensure AI never sees private, confidential, or sensitive data. Tools like CamoText can help you efficiently protect sensitive text before it reaches any model.
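For illustration, here is a minimal sketch of pre-submission scrubbing: swapping obvious identifiers for placeholders before text ever reaches a model. Dedicated tools like CamoText are far more thorough; these regex patterns are illustrative only and will miss many kinds of sensitive data.

```python
import re

# Illustrative patterns only; real anonymization tools cover far more.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace matches of each pattern with a bracketed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(scrub("Reach Jane at jane.doe@client.com or 555-867-5309."))
# Prints: Reach Jane at [EMAIL] or [PHONE].
```

A scrub step like this belongs before the prompt is sent, so nothing sensitive ever enters a provider's logs or training data.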
What to Tell Your Clients
Tell them the truth: you're using AI to be more efficient, without compromising your judgment, ethics, or privacy obligations.
If they're using AI services, remind them of the severe hallucination and privacy risks. AI models can and do fabricate case citations, statutes, and facts with total confidence. If clients are using AI for legal-adjacent work, suggest they use it for general purposes like honing questions and requests for you.
Efficient intake and empowered clients who appreciate reduced bills make for a win-win.
Best Practices
- Remove sensitive data before AI absorbs it—use tools like CamoText
- Invest time in creating good prompts—Role + Task + Instructions + Context—and don't be afraid to try again or ask multiple models
- Review and verify the output—you are ultimately responsible for accuracy
- Remember: genius, but just an intern—it can't replace your judgment and experience
- Start with low-stakes tasks like issue-spotting, quick second opinions, and document summaries before using it for client-facing work
- Share these practices with your clients and colleagues
An extraordinary amount of intellectual power, with near-instant response time, reasoning, and research capabilities, for inexpensive subscriptions. Low downside (takes seconds and you're reviewing the output anyway) and extremely high upside.
You're not getting replaced by AI anytime soon—you're getting superpowered.
However, if you don't use this technology, you might get outpaced by others who do.
Also, it's fun. Talk to it like an interactive academic course or a podcast, learn a new skill or language with an infinitely patient and personalized assistant, or have it create an outline for a presentation.