Using AI Safely & Smartly
AI tools are genuinely powerful — and genuinely imperfect. Understanding both sides is what separates a confident AI user from a frustrated one. This guide covers everything you need to know to use AI safely, especially for health research.
— Gary Stouffer
Common AI Errors to Know About
AI makes mistakes — and it makes them confidently. Here's what to watch for.
The most important thing to understand about AI is this: it doesn't know what it doesn't know. It can give you a completely wrong answer in the same calm, confident tone as a correct one. That's manageable, as long as you know to expect it. Here are the six errors you'll encounter most.
Hallucination
AI sometimes invents facts — doctors, studies, statistics, drug names — that do not exist. It presents them as real with complete confidence.
Outdated Information
Every AI has a "knowledge cutoff" — a date after which it knows nothing. Medical guidelines, drug interactions, and dosage recommendations change regularly.
Overconfident Answers
AI rarely says "I'm not sure." It will give you a specific answer even when uncertainty would be more honest. Watch for this especially with dosages and diagnoses.
Mixing Up Similar Things
AI can confuse drugs with similar names, conditions with overlapping symptoms, or procedures with similar descriptions — especially in medical contexts.
Wrong Statistics
Numbers, percentages, survival rates, and study outcomes are frequently misquoted or invented. Never use an AI-provided statistic without verifying it at the original source.
Wrong Location / Context
AI may give you information based on guidelines from another country, or for a different patient population than yours — without flagging that it's doing so.
How to Verify What AI Tells You
Seven checks that help you separate reliable information from AI errors.
The good news: AI is an excellent starting point for research, even if it's an unreliable final answer. These seven verification habits will help you use AI confidently without being misled by it.
Ask AI to Show Its Work
Ask the AI to explain where it got its information. A good AI will acknowledge when it's uncertain. If it cites a specific study or source, write it down — then go verify it exists.
Ask a Second AI the Same Question
Use two different AI tools — for example, ask Claude first, then ask ChatGPT the same question. If they agree, that's a good sign. If they disagree, dig deeper. Agreement isn't proof, but disagreement is a clear warning to verify.
Ask AI to Argue Against Itself
After getting an answer, ask: "What are the strongest reasons someone might disagree with this?" or "What could be wrong about what you just told me?" This forces AI to surface its own blind spots.
Verify Every Number at the Original Source
Never trust a statistic, percentage, dosage, or study result from AI alone. Go directly to the original source — Mayo Clinic, NIH, CDC, or the journal where the study was published. If you can't find the source, the number doesn't exist for your purposes.
Search the Medical Source Directly
For any health information, go to Mayo Clinic (mayoclinic.org), NIH (nih.gov), or PubMed (pubmed.ncbi.nlm.nih.gov) and search there independently of what AI told you. If what you find matches what AI said, you can use it with confidence.
Check the AI's Knowledge Cutoff Date
Ask the AI when its training data ends. Anything medical, legal, or financial that has changed since that date will not be reflected in its answers. For recent guideline changes, drug approvals, or new treatments, always go to an updated medical source.
Use AI to Generate Questions, Not Final Answers
The best use of AI in medical research is to help you build a list of smart questions to bring to your doctor — not to give you answers your doctor should be giving you. AI is an excellent research assistant and a poor diagnostician.
Rules for Medical Research With AI
Follow these every time — without exception.
Medical research is where AI can be most helpful — and where mistakes carry the most serious consequences. These rules protect you. Gary follows every one of them in his Medical Research Assistance consultations.
Gary's Medical Research Rules — Non-Negotiable
Rule 1: AI is a Research Tool. Your Doctor is Your Doctor.
AI can help you understand, prepare, and ask better questions. It cannot examine you, review your full medical history, or account for how your medications interact. Never make a medical decision based solely on what AI tells you.
Rule 2: Always Name Your Condition and Medications Specifically
Vague questions get vague answers. The more specific you are — your age, your diagnosis, the exact medication name and dose — the more relevant and accurate the AI's response will be. General answers are less useful and more likely to lead you astray.
Rule 3: Verify Everything at a Credible Medical Source
Every piece of medical information from AI must be verified at a trusted source before you act on it or share it. Use Mayo Clinic, NIH, Cleveland Clinic, CDC, or MedlinePlus — not general Google searches or health blogs.
Rule 4: Never Adjust Medications Based on AI Advice
Do not change your dose, stop a medication, or add a supplement based on what AI tells you — even if it sounds authoritative. Drug interactions and individual tolerance require a licensed pharmacist or physician to evaluate properly.
Rule 5: Use AI to Prepare, Not to Replace, Your Appointment
The ideal medical AI workflow: research before your appointment to understand the topic, generate smart questions to ask your doctor, and bring the research summary with you for discussion. AI helps you be a more informed patient — not a self-treating one.
Rule 6: Ask AI to Flag What It Doesn't Know
Explicitly ask: "What aspects of this topic are you uncertain about?" and "What would a doctor need to know about me personally to give a more accurate answer?" This surfaces the limitations AI won't volunteer on its own.
Rule 7: Bring Your Research to Your Doctor — Don't Hide It
Many patients research with AI and then don't mention it to their doctor. Do the opposite: tell your doctor what you found and where. Most physicians appreciate informed patients. Saying "I read on Mayo Clinic that..." is a conversation starter. Saying "AI told me to..." is less helpful — lead with the verified source.
Rule 8: For Emergencies, Call 911 — Not AI
AI is not a substitute for emergency medical care. If you are experiencing chest pain, difficulty breathing, stroke symptoms, or any medical emergency — call 911 immediately. Do not stop to ask AI what to do.
Important Disclaimer: All AI-assisted research provided through Stouffer AI Consulting is research assistance only — not medical advice. Gary Stouffer is not a physician and does not diagnose, prescribe, or recommend treatment. All research findings must be reviewed with your qualified healthcare provider before making any medical decision.
AI Scams to Watch Out For
Criminals are using AI to target retirees. Know the warning signs.
As AI has become mainstream, scammers have followed. Some of the most sophisticated fraud targeting retirees now involves AI-generated voices, videos, and messages. Here are the ones you need to know about.
AI Voice Cloning Calls
Scammers record a few seconds of a family member's voice from social media, then use AI to clone it. They call you pretending to be your child or grandchild in an emergency — needing bail money, hospital fees, or emergency wire transfers.
Fake AI Tool Websites
Fraudulent websites pretend to offer ChatGPT, Claude, or Gemini — but are designed to steal your credit card or login information. Some charge subscription fees for tools that are available free or cheap at the real websites.
AI "Health Cure" Scams
Scammers use AI-generated fake testimonials, fake doctors, and fake research papers to sell worthless supplements, treatments, or "miracle cures." The AI-generated content looks very polished and professional — and is entirely fabricated.
Deepfake Video Scams
AI can now generate convincing videos of real people saying things they never said — including doctors, celebrities, or even government officials "endorsing" a product or asking you to send money. If a video is urging you to act quickly, be suspicious.
AI-Generated Phishing Emails
Scammers now use AI to write phishing emails with perfect grammar and spelling — eliminating the typos that used to make fraud obvious. Emails may appear to be from your bank, Medicare, the IRS, or even your own doctor's office.
"AI Investment" Fraud
Scammers claim their investment platform uses AI to generate guaranteed returns. They may show you impressive (fake) dashboards and even let you "withdraw" small amounts at first to build trust — before disappearing with larger deposits.
Trusted Sources for Medical Research
Go here to verify what AI tells you about your health.
These are the sources Gary uses and recommends in every Medical Research Assistance consultation. Bookmark them in your browser — they should be your first stop after any AI health research session.
| Source | Website | Best For |
|---|---|---|
| Mayo Clinic | mayoclinic.org | Plain-English explanations of conditions, medications, and procedures. Excellent for understanding diagnoses. |
| NIH (National Institutes of Health) | nih.gov | Authoritative government health research. Use for treatment guidelines and research summaries. |
| MedlinePlus | medlineplus.gov | NIH's plain-language health library. Excellent drug information, condition overviews, lab test explanations. |
| Cleveland Clinic | my.clevelandclinic.org | Highly readable health articles, especially strong on heart disease, cancer, and chronic conditions. |
| CDC | cdc.gov | Vaccines, infectious disease, prevention guidelines, and public health data. |
| PubMed | pubmed.ncbi.nlm.nih.gov | Peer-reviewed medical research. Use to verify specific studies AI claims to cite. Search by study title. |
| FDA | fda.gov | Drug approvals, medication warnings, supplement safety, and medical device information. |
| DailyMed (NIH) | dailymed.nlm.nih.gov | Official prescribing information for every FDA-approved drug — the most authoritative drug database available. |
Gary's 10 Golden Rules for Safe AI Use
Print this out. Put it next to your computer.
- ✓ AI is a starting point — always verify what it tells you.
- ✓ For health information, verify at Mayo Clinic, NIH, or Cleveland Clinic before acting.
- ✓ Never change a medication or dose based on AI advice alone.
- ✓ When AI cites a study, find it yourself before trusting the number.
- ✓ Ask two different AI tools the same question and compare answers.
- ✓ Ask AI what it's uncertain about — it won't always volunteer that information.
- ✓ Only use real AI tools: claude.ai, chatgpt.com, gemini.google.com.
- ✓ If someone asks for money or personal info unexpectedly — stop and call a family member first.
- ✓ Use AI to build your question list for your doctor — not to replace your doctor.
- ✓ In a medical emergency: call 911 first. AI second — if at all.
Want to Practice This With Someone Beside You?
In a personal consultation, Gary walks through safe AI research techniques with you using your own health questions — so you leave confident, not just informed.
Book a Personal Consultation