AI Safety Guide

Using AI Safely & Smartly

AI tools are genuinely powerful — and genuinely imperfect. Understanding both sides is what separates a confident AI user from a frustrated one. This guide covers everything you need to know to use AI safely, especially for health research.

"I trust AI as a starting point, never as a final answer — especially when it comes to my health. That's the most important rule I teach."
— Gary Stouffer
⚠️ Common AI Errors to Know About
AI makes mistakes — and it makes them confidently. Here's what to watch for.

The most important thing to understand about AI is this: it doesn't know what it doesn't know. It can give you a completely wrong answer in the same calm, confident tone as a correct one. That's only dangerous if you don't know to expect it. Here are the six errors you'll encounter most.

🌀 Hallucination

AI sometimes invents facts — doctors, studies, statistics, drug names — that do not exist. It presents them as real with complete confidence.

Example: You ask for studies on a supplement. AI cites a "2021 Johns Hopkins study." You Google it. It doesn't exist.
📅 Outdated Information

Every AI has a "knowledge cutoff" — a date after which it knows nothing. Medical guidelines, drug interactions, and dosage recommendations change regularly.

Example: AI tells you a medication is "generally safe" — but guidelines were updated 8 months ago with new warnings.
🎯 Overconfident Answers

AI rarely says "I'm not sure." It will give you a specific answer even when uncertainty would be more honest. Watch for this especially with dosages and diagnoses.

Example: You ask "Is 10mg of X safe at my age?" AI says yes — without knowing your kidney function, other medications, or full history.
🔄 Mixing Up Similar Things

AI can confuse drugs with similar names, conditions with overlapping symptoms, or procedures with similar descriptions — especially in medical contexts.

Example: You ask about "Hydroxyzine" and AI partially answers as if you asked about "Hydroxychloroquine" — a completely different drug.
📊 Wrong Statistics

Numbers, percentages, survival rates, and study outcomes are frequently misquoted or invented. Never use an AI-provided statistic without verifying it at the original source.

Example: AI says "studies show 78% improvement" — but the actual study showed 38%, or no study exists at all.
🌍 Wrong Location / Context

AI may give you information based on guidelines from another country, or for a different patient population than yours — without flagging that it's doing so.

Example: AI gives you UK NHS dosage guidance when you're in the US, where recommendations differ.

How to Verify What AI Tells You
Seven checks that help you separate reliable information from AI errors.

The good news: AI is an excellent starting point for research, even if it's an unreliable final answer. These seven verification habits will help you use AI confidently without being misled by it.

1. Ask AI to Show Its Work

Ask the AI to explain where it got its information. A good AI will acknowledge when it's uncertain. If it cites a specific study or source, write it down — then go verify it exists.

Try this: "Where does this information come from? Are you certain, or is this your best estimate?"
2. Ask a Second AI the Same Question

Use two different AI tools — for example, ask Claude first, then ask ChatGPT the same question. If they agree, that's a good sign. If they disagree, dig deeper. Agreement isn't proof, but disagreement is a clear warning to verify.

Try this: Copy your question into both Claude (claude.ai) and ChatGPT (chatgpt.com) and compare the answers side by side.
3. Ask AI to Argue Against Itself

After getting an answer, ask: "What are the strongest reasons someone might disagree with this?" or "What could be wrong about what you just told me?" This forces AI to surface its own blind spots.

Try this: "Now give me the counterargument. What would a skeptical doctor say about this information?"
4. Verify Every Number at the Original Source

Never trust a statistic, percentage, dosage, or study result from AI alone. Go directly to the original source — Mayo Clinic, NIH, CDC, or the journal where the study was published. If you can't find the source, the number doesn't exist for your purposes.

Try this: "Give me the exact URL or publication title for the source of that statistic."
5. Search the Medical Source Directly

For any health information, go to Mayo Clinic (mayoclinic.org), NIH (nih.gov), or PubMed (pubmed.ncbi.nlm.nih.gov) and search there independently of what AI told you. If what you find matches what AI said, you can use it with confidence.

Try this: Take the medication or condition AI mentioned and search it directly on mayoclinic.org.
6. Check the AI's Knowledge Cutoff Date

Ask the AI when its training data ends. Anything medical, legal, or financial that has changed since that date will not be reflected in its answers. For recent guideline changes, drug approvals, or new treatments, always go to an updated medical source.

Try this: "When does your training data end? Could current guidelines on this topic be different from what you're telling me?"
7. Use AI to Generate Questions, Not Final Answers

The best use of AI in medical research is to help you build a list of smart questions to bring to your doctor — not to give you answers your doctor should be giving you. AI is an excellent research assistant and a poor diagnostician.

Try this: "Based on this condition, what are the five most important questions I should ask my doctor at my next appointment?"
🏥 Rules for Medical Research With AI
Follow these every time — without exception.

Medical research is where AI can be most helpful — and where mistakes carry the most serious consequences. These rules protect you. Gary follows every one of them in his Medical Research Assistance consultations.

📋 Gary's Medical Research Rules — Non-Negotiable

🚫 Rule 1: AI is a Research Tool. Your Doctor is Your Doctor.

AI can help you understand, prepare, and ask better questions. It cannot examine you, review your full medical history, or account for how your medications interact. Never make a medical decision based solely on what AI tells you.

📖 Rule 2: Always Name Your Condition and Medications Specifically

Vague questions get vague answers. The more specific you are — your age, your diagnosis, the exact medication name and dose — the more relevant and accurate the AI's response will be. General answers are less useful and more likely to lead you astray.

🔗 Rule 3: Verify Everything at a Credible Medical Source

Every piece of medical information from AI must be verified at a trusted source before you act on it or share it. Use Mayo Clinic, NIH, Cleveland Clinic, CDC, or MedlinePlus — not general Google searches or health blogs.

💊 Rule 4: Never Adjust Medications Based on AI Advice

Do not change your dose, stop a medication, or add a supplement based on what AI tells you — even if it sounds authoritative. Drug interactions and individual tolerance require a licensed pharmacist or physician to evaluate properly.

📝 Rule 5: Use AI to Prepare, Not to Replace, Your Appointment

The ideal medical AI workflow: research before your appointment to understand the topic, generate smart questions to ask your doctor, and bring the research summary with you for discussion. AI helps you be a more informed patient — not a self-treating one.

🔍 Rule 6: Ask AI to Flag What It Doesn't Know

Explicitly ask: "What aspects of this topic are you uncertain about?" and "What would a doctor need to know about me personally to give a more accurate answer?" This surfaces the limitations AI won't volunteer on its own.

👨‍⚕️ Rule 7: Bring Your Research to Your Doctor — Don't Hide It

Many patients research with AI and then don't mention it to their doctor. Do the opposite: tell your doctor what you found and where. Most physicians appreciate informed patients. Saying "I read on Mayo Clinic that..." is a conversation starter. Saying "AI told me to..." is less helpful — lead with the verified source.

🆘 Rule 8: For Emergencies, Call 911 — Not AI

AI is not a substitute for emergency medical care. If you are experiencing chest pain, difficulty breathing, stroke symptoms, or any medical emergency — call 911 immediately. Do not stop to ask AI what to do.

⚠️ Important Disclaimer: All AI-assisted research provided through Stouffer AI Consulting is research assistance only — not medical advice. Gary Stouffer is not a physician and does not diagnose, prescribe, or recommend treatment. All research findings must be reviewed with your qualified healthcare provider before making any medical decision.

🛡️ AI Scams to Watch Out For
Criminals are using AI to target retirees. Know the warning signs.

As AI has become mainstream, scammers have followed. Some of the most sophisticated fraud targeting retirees now involves AI-generated voices, videos, and messages. Here are the ones you need to know about.

📞 AI Voice Cloning Calls

Scammers record a few seconds of a family member's voice from social media, then use AI to clone it. They call you pretending to be your child or grandchild in an emergency — needing bail money, hospital fees, or emergency wire transfers.

⚠ Red flag: Caller needs money urgently and asks you not to tell anyone else.
🤖 Fake AI Tool Websites

Fraudulent websites pretend to offer ChatGPT, Claude, or Gemini — but are designed to steal your credit card or login information. Some charge subscription fees for tools that are available free or cheap at the real websites.

⚠ Red flag: Any AI tool website that isn't claude.ai, chatgpt.com, or gemini.google.com.
💊 AI "Health Cure" Scams

Scammers use AI-generated fake testimonials, fake doctors, and fake research papers to sell worthless supplements, treatments, or "miracle cures." The AI-generated content looks very polished and professional — and is entirely fabricated.

⚠ Red flag: Dramatic health claims, "doctors" you can't verify, no mention of side effects.
🎥 Deepfake Video Scams

AI can now generate convincing videos of real people saying things they never said — including doctors, celebrities, or even government officials "endorsing" a product or asking you to send money. If a video is urging you to act quickly, be suspicious.

⚠ Red flag: Video of a famous person, or of someone you know personally, endorsing an investment or health product.
💌 AI-Generated Phishing Emails

Scammers now use AI to write phishing emails with perfect grammar and spelling — eliminating the typos that used to make fraud obvious. Emails may appear to be from your bank, Medicare, the IRS, or even your own doctor's office.

⚠ Red flag: Urgent action required, link in email, request for personal or financial information.
💰 "AI Investment" Fraud

Scammers claim their investment platform uses AI to generate guaranteed returns. They may show you impressive (fake) dashboards and even let you "withdraw" small amounts at first to build trust — before disappearing with larger deposits.

⚠ Red flag: Guaranteed returns, pressure to invest more, can't easily withdraw your money.
🔒 Gary's Golden Rule for Scam Prevention: If someone contacts you unexpectedly and asks for money, personal information, or account access — hang up, close the tab, or don't reply. Then call the person or organization directly using a number you find yourself. Never use a number or link provided by the person who contacted you. When in doubt, call a family member before doing anything.

📚 Trusted Sources for Medical Research
Go here to verify what AI tells you about your health.

These are the sources Gary uses and recommends in every Medical Research Assistance consultation. Bookmark them in your browser — they should be your first stop after any AI health research session.

  • Mayo Clinic (mayoclinic.org): Plain-English explanations of conditions, medications, and procedures. Excellent for understanding diagnoses.
  • NIH, the National Institutes of Health (nih.gov): Authoritative government health research. Use for treatment guidelines and research summaries.
  • MedlinePlus (medlineplus.gov): NIH's plain-language health library. Excellent for drug information, condition overviews, and lab test explanations.
  • Cleveland Clinic (my.clevelandclinic.org): Highly readable health articles, especially strong on heart disease, cancer, and chronic conditions.
  • CDC (cdc.gov): Vaccines, infectious disease, prevention guidelines, and public health data.
  • PubMed (pubmed.ncbi.nlm.nih.gov): Peer-reviewed medical research. Use to verify specific studies AI claims to cite. Search by study title.
  • FDA (fda.gov): Drug approvals, medication warnings, supplement safety, and medical device information.
  • DailyMed, from NIH (dailymed.nlm.nih.gov): Official prescribing information for every FDA-approved drug — the most authoritative drug database available.

Gary's 10 Golden Rules for Safe AI Use

Print this out. Put it next to your computer.

  • AI is a starting point — always verify what it tells you.
  • For health information, verify at Mayo Clinic, NIH, or Cleveland Clinic before acting.
  • Never change a medication or dose based on AI advice alone.
  • When AI cites a study, find it yourself before trusting the number.
  • Ask two different AI tools the same question and compare answers.
  • Ask AI what it's uncertain about — it won't always volunteer that information.
  • Only use real AI tools: claude.ai, chatgpt.com, gemini.google.com.
  • If someone asks for money or personal info unexpectedly — stop and call a family member first.
  • Use AI to build your question list for your doctor — not to replace your doctor.
  • In a medical emergency: call 911 first. AI second — if at all.

Want to Practice This With Someone Beside You?

In a personal consultation, Gary walks through safe AI research techniques with you using your own health questions — so you leave confident, not just informed.

Book a Personal Consultation