
    9 things you shouldn’t use AI for at work

    By Michael Comaous · August 1, 2025


    Image credit: Mensent Photography / Getty Images

    ZDNET’s key takeaways

    • Sometimes an AI can cause you or your company irreparable harm.
    • Sharing confidential data with an AI could have legal consequences.
    • Don’t let an AI talk to customers without supervision.

    A few weeks ago, I shared with you “9 programming tasks you shouldn’t hand off to AI – and why.” It’s full of well-reasoned suggestions and recommendations for how to avoid having an AI produce code that could ruin your whole day.

    Then, my editor and I got talking, and we realized the whole idea of “when not to use an AI” could apply to work in general. In this article, I present to you nine things you shouldn’t use AI for while at work. This is far from a comprehensive list, but it should make you think.

    Also: This one feature could make GPT-5 a true game changer (if OpenAI gets it right)

    “Always keep in mind that AI isn’t going to read you your Miranda Rights, wrap your personal information in legal protections like HIPAA, or hesitate to disclose your secrets,” said LinkedIn Learning AI instructor Pam Baker, the bestselling author of ChatGPT For Dummies and Generative AI For Dummies.

    “That goes double for work AI, which is monitored closely by your employer. Whatever you do or tell AI can and likely will be used against you at some point.”

    Also: You can use Claude AI’s mobile app to draft emails, texts, and calendar events now – here’s how

    To keep things interesting, read on to the end. There, I share some fun and terrifying stories about how using AI at work can go terribly, horribly, and amusingly wrong.

    Without further ado, here are nine things you shouldn’t do with AI at work.

    1. Handling confidential or sensitive data

    This is an easy one. Every time you give the AI some information, ask yourself how you'd feel if it were posted to the company's public blog or wound up on the front page of your industry's trade journal.

    This concern also covers information subject to disclosure regulations, such as HIPAA for health information or, for folks operating in the EU, GDPR for personal data.

    Also: OpenAI teases imminent GPT-5 launch. Here’s what to expect

    Regardless of what the AI companies tell you, it’s best to simply assume that everything you feed into an AI is now grist for the model-training mill. Anything you feed in could later wind up in a response to somebody’s prompt, somewhere else.
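    If some of that text has to pass through a hosted model anyway, a cheap first line of defense is scrubbing obvious identifiers before the prompt ever leaves your machine. Here's a minimal sketch in Python; the regex patterns and the redact() helper are illustrative assumptions, not a complete PII filter:

        import re

        # Illustrative patterns only. A real filter needs far broader coverage
        # (names, addresses, account numbers) and ideally a dedicated library.
        PATTERNS = {
            "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
            "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
            "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
        }

        def redact(text: str) -> str:
            """Swap obvious identifiers for placeholder tags before the text
            is sent to any third-party model."""
            for label, pattern in PATTERNS.items():
                text = pattern.sub(f"[{label}]", text)
            return text

        print(redact("Summarize this ticket from jane.doe@example.com, SSN 123-45-6789."))
        # -> Summarize this ticket from [EMAIL], SSN [SSN].

    Even with a filter like this in place, the safest assumption still stands: if it's truly confidential, it doesn't go in the prompt at all.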

    2. Reviewing or writing contracts

    Contracts are designed to be detailed and specific agreements on how two parties will interact. They are considered governing documents, which means that writing a bad contract is like writing bad code. Baaad things will happen.

    Do not ask AIs for help with contracts. They will make errors and omissions. They will make stuff up. Worse, they will do so while sounding authoritative, so you’re more likely to use their advice.

    Also: Anthropic’s AI agent can now automate Canva, Asana, Figma and more – here’s how it works

    Also, the confidentiality of a contract's terms is often governed by the contract itself. In other words, many contracts say that what's actually in the contract is confidential, and that if you share the particulars of your agreement with any outside party, there will be dire consequences. Sharing with an AI, as discussed above, is like publishing on the front page of a blog.

    Let me be blunt. If you let an AI work on a contract and it makes a mistake, you (not it) will be paying the price for a long, long time.

    3. Using an AI for legal advice

    You know the trope where what you share with your lawyer is protected information and can’t be used against you? Yeah, your friendly neighborhood AI is not your lawyer.

    As reported in Futurism, OpenAI CEO (and ChatGPT’s principal cheerleader) Sam Altman told podcaster Theo Von that there is no legal confidentiality when using ChatGPT for your legal concerns.

    Earlier, I discussed how AI companies might use your data for training and embed that data in prompt responses. However, Altman took this assertion up a notch. He suggested OpenAI is obligated to share your conversations with ChatGPT if they are subpoenaed by a court.

    Also: What Zuckerberg’s ‘personal superintelligence’ sales pitch leaves out

    Jessee Bundy, a Knoxville-based attorney, amplified Altman’s statement in a tweet: “There’s no legal privilege when you use ChatGPT. So if you’re pasting in contracts, asking legal questions, or asking it for strategy, you’re not getting legal advice. You’re generating discoverable evidence. No attorney/client privilege. No confidentiality. No ethical duty. No one to protect you.”

    She summed up her observations with a particularly damning statement: “It might feel private, safe, and convenient. But lawyers are bound to protect you. ChatGPT isn’t, and can be used against you.”

    4. Using an AI for health or financial advice

    While we’re on the topic of guidance, let’s hit two other categories where highly trained, licensed, and regulated professionals are available to provide advice: healthcare and finance.

    Look, it’s probably fine to ask ChatGPT to explain a medical or financial concept to you as if you were a five-year-old. But when it comes time to ask for real advice that you plan on considering as you make major decisions, just don’t.

    Let’s step away from the liability risk issues and focus on common sense. First, if you’re using something like ChatGPT for real advice, you have to know what to ask. If you’re not trained in these professions, you might not know.

    Also: How Meta’s new AI chatbot could strike up a conversation with you 

    Second, ChatGPT and other chatbots can be spectacularly, overwhelmingly, and almost unbelievably wrong. They misconstrue questions, fabricate answers, conflate concepts, and generally provide questionable advice.

    Ask yourself, are you willing to bet your life or your financial future on something that a people-pleasing robot made up because it thought that’s what you wanted to hear?

    5. Presenting AI-generated work as your own

    When you ask a chatbot to write something for you, do you claim it as your own? Some folks have told me that because they wrote the prompts, the output is a product of their creativity.

    Yeah? Not so much. Webster’s defines “plagiarize” as “to steal and pass off (the ideas or words of another) as one’s own,” and to “use (another’s production) without crediting the source.” The dictionary also defines plagiarize as “to commit literary theft: present as new and original an idea or product derived from an existing source.”

    Also: Is AI overhyped or underhyped? 6 tips to separate fact from fiction

    Does that not sound like what a chatbot does? It sure does “present as new and original an idea…derived from an existing source.” Chatbots are trained on existing sources. They then parrot back those sources after adding a bit of spin.

    Let’s be clear. Using an AI, and saying its output is yours, could cost you your job.

    6. Talking to customers without monitoring the chatter

    The other day, I had a technical question about my Synology server. I filed a support ticket after hours. A bit later, I got an email response from a self-identified support AI. The cool thing was that the answer was complete and just what I needed, so I didn’t have to escalate my ticket to a human helper.

    But not all AI interactions with customers go that well. Even a year and a half later, I'm still chuckling about the Chevy dealer chatbot that offered a customer a $55,000 Chevy Tahoe for a buck.

    It's perfectly fine to provide a trained chatbot as one support option for customers. But don't assume it's always going to be right. Ensure customers have the option to talk with a human, and monitor the AI-enabled process. Otherwise, you could be giving away $1 Tahoes, too.
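    If you do deploy a support bot, a thin guardrail layer helps: hold back any draft reply containing commitments a human should approve, such as prices or refunds. Here's a minimal sketch, where the trigger list and the escalate_to_human() hook are hypothetical stand-ins for a real review queue:

        import re

        # Hypothetical triggers: dollar amounts, discounts, "legally binding" talk.
        RISKY = re.compile(r"\$\d|discount|refund|guarantee|legally binding", re.IGNORECASE)

        def escalate_to_human(draft: str) -> str:
            # Stand-in hook: a real system would queue the draft for a human agent.
            return "Let me connect you with a team member who can confirm that for you."

        def guarded_reply(draft: str) -> str:
            """Screen the bot's draft before it ever reaches the customer."""
            return escalate_to_human(draft) if RISKY.search(draft) else draft

        print(guarded_reply("Sure! The 2024 Tahoe is yours for $1. It's a legally binding offer."))
        # -> the escalation message, not a $1 Tahoe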

    7. Making final hiring and firing decisions

    According to a survey by resume-making app Resume Builder, a majority of managers are using AI “to determine raises (78%), promotions (77%), layoffs (66%), and even terminations (64%).”

    “Why are you firing me?”

    “It’s not my fault. The AI made me do it.”

    Yeah, that. Worse, apparently at least 20% of managers, most of whom haven’t been trained in the rights and wrongs of AI usage, are using AIs to make final employment decisions without even bothering to oversee the AI.

    But here’s the rub. Jobs are often governed by labor laws. Despite the current anti-DEI push coming from Washington, bias can still lead to discrimination lawsuits. Even if you haven’t technically done anything wrong, defending against a lawsuit can be expensive.

    Also: Open-source skills can save your career when AI comes knocking

    If you cause your company to be on the receiving end of a lawsuit because you couldn't be bothered to be human enough to double-check why your AI was canning Janice in accounting, you'll be the next one handed a pink slip. Don't do it. Just say no.

    8. Responding to journalists or media inquiries

    I’m going to tell you a little secret. Journalists and writers do not exist solely to promote your company. We’d like to help, certainly. It feels good knowing we’re helping folks grow their businesses. But, and you’ll need to sit down for this news, there are other companies.

    We are also busy. I get thousands of emails every day. Hundreds of them are about the newest and by far most innovative AI company ever. Many of those pitches are AI-generated because the PR folks couldn't be bothered to take the time to focus their pitch. Some are so bad that I can't even tell what they're trying to hawk.

    But then, there's the other side. Sometimes, I'll reach out to a company, willing to spend my most valuable resource — time — on their behalf. When I get back a response that's AI-driven, I'll either move on to the next company or mock them on social media.

    Also: 5 entry-level tech jobs AI is already augmenting, according to Amazon

    Some of those AI-driven answers are really, really inappropriate. However, because the AI is representing the company instead of, you know, maybe a thinking human, an opportunity is lost.

    Keep in mind that I don’t like publishing things that will cost someone their job. But other writers are not necessarily similarly inclined. A properly run business will not only use a human to respond to the press, but will also limit the humans allowed to represent the company to those properly experienced in what to say.

    Or go ahead and cut corners. I always need fun fodder for my Facebook feed.

    9. Using AI for coding without a backup

    Earlier, I wrote "9 programming tasks you shouldn't hand off to AI – and why," which detailed the coding work best kept away from an AI. I've long been nervous about ceding too much responsibility to an AI, and quite concerned about managing codebase maintenance.

    But I didn’t really understand how far stupid could go when it came to delegating coding responsibility to the AI. I mean, yes, I know AIs can be stupid. And I sure know humans can be stupid. But when AIs and humans work in tandem to advance the cause of their stupidity together, the results can be truly awe-inspiring.

    Also: Trump’s AI plan pushes AI upskilling instead of worker protections – and 4 other key takeaways

    In “Bad vibes: How an AI agent coded its way to disaster,” my ZDNET colleague Steven Vaughan-Nichols wrote about a developer who happily vibe-coded himself to an almost-complete piece of software. First, the AI hard-coded lies about how unit tests performed. Then the AI deleted his entire codebase.

    It’s not necessarily wrong to use AI to help you code. But if you’re using a tool that can’t be backed up, or you don’t bother to back up your code first, you’re simply doing your best to earn a digital Darwin award.
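    The cheapest insurance is a snapshot before the agent touches anything. Here's a minimal sketch that commits and tags the working tree ahead of an AI coding session; it assumes the project is already a git repository, and the tag naming is just a convention:

        import subprocess
        from datetime import datetime

        def snapshot(repo_path: str = ".") -> str:
            """Commit everything and tag it, so one command can undo an AI session."""
            tag = datetime.now().strftime("pre-ai-%Y%m%d-%H%M%S")
            subprocess.run(["git", "-C", repo_path, "add", "-A"], check=True)
            # --allow-empty keeps this from failing if the tree is already clean.
            subprocess.run(["git", "-C", repo_path, "commit", "--allow-empty",
                            "-m", f"Snapshot before AI session ({tag})"], check=True)
            subprocess.run(["git", "-C", repo_path, "tag", tag], check=True)
            return tag  # roll back later with: git reset --hard <tag>

        print("Created snapshot tag:", snapshot())

    Push the tag somewhere off the machine, too. An agent that can delete your codebase can usually delete your local .git directory right along with it.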

    Bonus: Other examples of what not to do

    Here’s a lightning round of boneheaded moves using AI. They’re just too good (and by good, I mean bad) not to recount:

    • Letting a chatbot manage job applicant data: Remember how we told you not to use an AI for hiring and firing? McDonald’s uses a chatbot to screen applicants. Apparently, the chatbot exposed millions of applicants’ personal information to a hacker who used the password 123456.
    • Replacing support staff with an AI, and gloating: A CEO of e-commerce platform Dukaan terminated 90% of his support staff and replaced them with an AI. Then he bragged about it. On Twitter/X. The public response was less than positive. Way less.
    • Producing a reading list of entirely fake titles: The Chicago Sun-Times, normally a very well-respected paper, published a summer reading list generated by an AI. The gotcha? None of the books were real.
    • Suggesting terminated employees turn to a chatbot for comfort: An Xbox producer (yes, that’s Microsoft) suggested that ChatGPT or Copilot could “help reduce the emotional and cognitive load that comes with job loss” after Microsoft terminated 9,000 employees. Achievement unlocked.

    What about you? Have you seen an AI go off the rails at work? Have you ever been tempted to delegate a task to a chatbot that, in hindsight, probably needed a human touch? Do you trust AI to handle sensitive data, communicate with customers, or make decisions that affect people’s lives? Where do you draw the line in your work? Let us know in the comments below.


    You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.
