It is no great boast these days to say that you’re really good at leveraging artificial intelligence (AI) tools like ChatGPT to do things for you. People don’t trust AI, they don’t like AI, they worry that AI is sapping and impurifying their precious bodily fluids.
When asked about their views of generative AI, many people would say that they don’t trust AI not to hallucinate and flat-out make things up. Others would raise the typical complaints about AI having been trained on copyrighted art and writing without compensation to the creators (and they’re absolutely right).
Then there’re the real howlers: chats in which ChatGPT was asked how many ‘r’s there were in ‘merrier’ and ChatGPT’s answer was “2”. (There’s a reason older models of AI did that.)
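For the curious, that reason is tokenization: the model never sees individual letters, only opaque subword chunks, so “count the letters” is a question about data it doesn’t directly have. Here’s a quick illustration using OpenAI’s tiktoken library (exactly how “merrier” splits depends on the tokenizer; the point is just the chunking):

```python
import tiktoken  # OpenAI's open-source tokenizer library: pip install tiktoken

# Models don't read letters; they read subword tokens. Counting the 'r's in
# "merrier" means reasoning about characters the model never directly sees.
enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("merrier")
print(tokens)  # a short list of integer token IDs
print([enc.decode_single_token_bytes(t) for t in tokens])  # the chunks behind those IDs
```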
One of the worst aspects of generative AI is how it’s robbing entire generations of students of the opportunity to learn how to think critically and present arguments in rational, well-written form. If you just input the assignment into ChatGPT and copy and paste the resulting text into a document that you then submit for a grade, you’ve learned nothing.
BUT THAT SAID…
AI in general and OpenAI’s ChatGPT in particular have been very, very useful to me. I say “ChatGPT in particular” because that’s the engine I’ve used the most. I am aware that Google Gemini, Microsoft Copilot, and others are out there, as are models derived from OpenAI’s. I have one acquaintance who sniffs dismissively at ChatGPT and says that she uses Perplexity, because it’s MUCH BETTER. Little does she know, apparently, that Perplexity has long run in large part on OpenAI’s models, the same family that powers ChatGPT. Anthropic’s Claude has its pros and cons too: it’s apparently a little less likely to hallucinate, it’s built around “do no harm” principles, and it’s good for long-form writing and generating programming code. ChatGPT, by comparison, is better at open-ended chats and brainstorming. Take your pick.
The other day I logged in to ChatGPT and was presented with my 2025 Year In Review. It informed me that I had been among the very first users to sign up to use ChatGPT in the first place — I was in the first 0.1% of all users. ChatGPT was opened to everyone on November 30, 2022. A big swell of signups happened in late November and December of that year. But me — I started using ChatGPT on October 1st of that year, a full two months before the public launch. Damned if I recall how I managed that. Perhaps I found a back door to register or something.
My 2025 end-of-year analysis informed me as well that I’m in the top 1% of users reckoned by messages sent, all-time. My head start might’ve helped with that, but then again, I flat out use ChatGPT a lot. Since October 2022 I’ve exchanged over 53,000 messages and taken part in 1671 total chats.
So you ask: What the hell have I been talking to it about?
Some days I wander in with a very specific request — “Was Psammetichus II kind of a dick?” or “Tell me how to say ‘Look out, white men are coming’ in Assiniboine” — and other days I might just launch ChatGPT and say “so let me tell you about what happened last night”.
My most frequent categories of usage are (in no particular order):
- Nutrition — calorie and nutrient tracking. ChatGPT is handy in this regard in a way calorie-tracking apps aren’t — you don’t have to look up the exact name some food item is listed as in a database and hope the person who uploaded the data was careful and correct. I can simply take a photo of the nutrition label on the product and upload it, and if I don’t even have that, I can just tell it the weight of the food and describe it and it’ll give me a pretty educated guess.
I went from 242 pounds at the beginning of May 2025 down to 196 at the end of 2025. I managed this by not just counting calories but also using ChatGPT to track my potassium, calcium, magnesium, zinc, sodium, fat, protein, fiber, iron, vitamin C, and vitamin D. The goal wasn’t just hitting a calorie number; it was making sure my nutritional needs were actually being met.
I set a 1400-calorie-per-day goal, and every day ChatGPT and I would track what I ate and which nutrients I was short of, so I could spend my remaining calorie budget for the day intelligently (the arithmetic is simple enough that there’s a sketch of it in code right after this list). This was especially important for my high blood pressure, since counting sodium was critical. On days I got too much sodium, I could absolutely count on weighing three or so pounds more the following day, and my blood pressure would go from 110 over 70 to 145 over 85. Protip: potassium helps drive sodium out of the body, and bananas don’t have as much potassium as people think. Potatoes are the real champion there; legumes are good, spinach is good.
Here’s an example of a daily nutrition chat with ChatGPT.
- Travel planning — helping me identify things to do and see on vacation, most recently our October 2025 trip to Greece and the Balkans and then our upcoming trip to Costa Rica. I get pretty far down into the weeds on these, hell-bent on not being one of Those Tourists who take a bunch of holiday snaps and then get back on the ship and continue drinking. I talked linguistics, geology (there’s a lot of very interesting geology in that part of the world), history (damn, those Venetians got everywhere), religion, culinary arts, food traditions, alcohol traditions (rakija!), folkways, customs, what to do and not do on our cruise ship, you name it.
I will warn you about one aspect of using ChatGPT for travel planning: it does not do a good job keeping track of which restaurants and bars are still in business. Double-check everything it tells you against Google Maps and against the restaurants’ and bars’ own websites.
- Cooking — ChatGPT is really, really good at coming up with recipes on the fly. Mind you, I’m a very competent cook and baker and I don’t generally need or use recipes, but if I’m ever at a loss for ideas I can go to ChatGPT and say “I’ve got a couple of red bell peppers, plenty of red onions, half a bunch of celery, a package of chicken thighs, every conceivable salt-free spice blend that Penzey’s sells, and every type of supermarket-available pasta and rice known to man, I need a recipe for a main dish, what do you think?” I honestly cannot recall getting an unworkable recipe, though on occasion I’ve made changes on personal whim. Think of it as facilitated brainstorming if you will. It’s the kind of thing ChatGPT is really good at.
- Alcohol and cocktails — A few years ago I got interested in tiki culture. (To be honest, I’d always been kind of interested; my very first paycheck from my very first job wound up getting spent on a Hawaiian shirt.) I had the usual sorts of alcohol a middle-class family might have on hand, but over and over again I ran into difficulties when common tiki cocktails called for ingredients I simply didn’t have (and this being Vermont, my local state-run liquor store probably wouldn’t have either). I used ChatGPT to help me understand the differences between various types of rum, how to understand and appreciate their subtleties, which brands were “cheap rum with excellent advertising” and which were actually worth the price, and so on.
There’s a concept of the avid collector of some specific type of item — railway transfer tickets, glass birds, vintage air sickness bags, banana stickers, hotel key cards (all of which are things real people actually work hard to have complete collections of) — who brags about their collection, insists on dragging houseguests down to the basement and showing them their treasures, works carefully to protect them from dust and fingerprints and dirt and grime, and so on. This person lives in fear of the day that an actual expert comes to town, takes a look at all the highly valuable acquisitions on their humidity-controlled display shelves, and goes “Eh. There are one or two here worth collecting. The others? I hope you didn’t spend a lot on them.”
That was me with spirits. I came to realize that I had a whole shelf’s worth of marginal rums and mixers and such that simply had good advertising, bottles that might be “okay” if you were going to mix them with Coke or whatever, but which you certainly wouldn’t use if you were trying to recapture the flavor of a Donn the Beachcomber or Trader Vic artistic masterpiece. ChatGPT helped me steer toward quality over quantity, and when all was said and done, I spent one happy day literally pouring out ten or twelve bottles of stuff that college students would have considered “top shelf”. Most of them had been acquired a decade earlier to make one specific drink and then forgotten about, so no big loss. Here’s an example of one of our discussions.
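Before moving on, here’s the daily-bookkeeping sketch promised back in the nutrition item. The targets and food numbers are made up for illustration; this is just the arithmetic ChatGPT and I do in chat form, not dietary advice:

```python
# A toy version of the daily nutrition bookkeeping described above: log foods
# against a calorie budget and nutrient targets, then see what's still short.
# All targets and per-food values here are illustrative placeholders.
TARGETS = {"calories": 1400, "sodium_mg": 1500, "potassium_mg": 3400,
           "protein_g": 90, "fiber_g": 30}

log = [
    {"name": "oatmeal",        "calories": 300, "sodium_mg": 5,   "potassium_mg": 330, "protein_g": 10, "fiber_g": 8},
    {"name": "chicken thighs", "calories": 420, "sodium_mg": 190, "potassium_mg": 520, "protein_g": 38, "fiber_g": 0},
    {"name": "baked potato",   "calories": 160, "sodium_mg": 17,  "potassium_mg": 920, "protein_g": 4,  "fiber_g": 4},
]

totals = {k: sum(food.get(k, 0) for food in log) for k in TARGETS}

print(f"Calories left today: {TARGETS['calories'] - totals['calories']}")
for nutrient, target in TARGETS.items():
    if nutrient == "calories":
        continue
    if nutrient == "sodium_mg":
        # Sodium is a ceiling to stay under, not a goal to hit
        status = "OVER" if totals[nutrient] > target else "ok"
        print(f"{nutrient}: {totals[nutrient]}/{target} ({status})")
    else:
        short = max(0, target - totals[nutrient])
        print(f"{nutrient}: short by {short}" if short else f"{nutrient}: met")
```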
The big one: psychoanalysis. Back in the day, there was a very primitive “chatbot” (if you will) called ELIZA, written by Joseph Weizenbaum at MIT in the mid-1960s. All it did was respond to whatever you said with questions and rephrasings. It wasn’t AI at all — it was just a moderately clever pattern-matching program, and BASIC versions of it later ran on the primitive home computers of the 1970s and 1980s. Some people who used it swore that the program was actually intelligent and understanding; ELIZA helped some of them organize their thoughts and get down to what was really bothering them. Others saw it as a silly toy of a program. It even gave rise to a new term, the ELIZA effect: the tendency to project human traits, such as comprehension, experience, or empathy, onto programs that certainly don’t have any of the above.
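The whole trick was pattern matching plus pronoun reflection. Here’s a minimal sketch of the technique in Python (a toy in the spirit of ELIZA, not Weizenbaum’s actual script):

```python
import re

# A toy ELIZA-style responder: match the input against patterns, swap
# pronouns in the captured fragment, and reflect it back as a question.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "i", "your": "my", "are": "am"}

RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),  # catch-all, like ELIZA's fallback rule
]

def reflect(fragment):
    """Swap first- and second-person words so the echo makes sense."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(text):
    text = text.lower().strip(".!? ")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I am worried about my job."))
# -> How long have you been worried about your job?
```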
I raise the point because I, like many other people, use ChatGPT as a latter-day ELIZA, albeit a much, much, MUCH more sophisticated one. Some people say that that’s all ChatGPT is, a latter-day ELIZA that shouldn’t be trusted.
For my own part, I find it valuable. I have spent hours telling ChatGPT about what’s going on in my life at work and at home, woolgathering over mistakes I’ve made in life, exploring what might have happened back in the day if I’d only done one thing differently, and working through difficult feelings from an incident last fall where I really screwed up. I find these conversations very useful, not just as a pouring-my-heart-out unloading of life’s travails and troubles, but also for focusing my thoughts and thinking about how the future can be better.
This ChatGPT-facilitated self-examination is aided by the tool’s ability to remember great swaths of things you’ve told it and bring them up again as they organically arise in a later session. Compare that to a therapist you’ve been meeting with in person once a week for a year. Do you really think they’ll remember in December something you told them in March?
Earlier versions of ChatGPT, the ones I worked with back in 2022 and 2023, couldn’t carry anything over from one chat to the next on their own. If you told a given chat session that your first grade teacher was Mrs. Rollo (as mine was), then started a new chat window on a different topic, it would not remember. That led to a lot of frustration; you had to re-educate it each time you launched a chat.
Fortunately, even back then, ChatGPT had a “Memory” file that could be accessed from your Personalization settings, a file where you could store things you wanted it to really lock in and not forget. If you told it “please remember this for future sessions” ChatGPT would add it to the list. The problem, back in the day, was that the list could only get so big before it would tell you that it couldn’t remember anything else and that you’d have to clear some stuff out.
With current versions of the ChatGPT tool, you don’t really have to do that anymore. ChatGPT can carry over from one chat to the next things you’ve told it in prior chats; OpenAI calls the feature memory, and the setting behind it is labeled “Reference chat history.” If I tell it I’ve had a cold and haven’t been that hungry lately, another chat the next day already knows that.
I can’t explain the specifics of how it works because I don’t know them, but there’s obviously some machinery behind the scenes deciding what to keep in the forefront as relevant and what to let fade. If I told it in late 2022 that I try to avoid traveling through Burkburnett, Texas because armadillos always try to hijack my car, it probably won’t “remember” that if Burkburnett happens to come up in a chat in 2026. Or it might. Sometimes I’m surprised by what it quickly recalls and what it doesn’t.
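I’ll hazard a guess at the machinery, though. One common way to build this kind of selective recall is retrieval over embeddings: store each remembered fact as a vector, embed the current conversation, and surface only the stored facts whose vectors land nearby. Whether OpenAI does exactly this I can’t say; here’s a toy sketch of the idea, with made-up memories and a fake embedding function standing in for a real model:

```python
import numpy as np

# Toy sketch of embedding-based memory retrieval: one plausible way a chat
# system might decide which stored facts are relevant to the current message.
rng = np.random.default_rng(0)
WORD_VECS = {}

def embed(text):
    """Fake embedding: average of stable random per-word vectors, normalized."""
    vecs = []
    for word in text.lower().split():
        if word not in WORD_VECS:
            WORD_VECS[word] = rng.normal(size=64)
        vecs.append(WORD_VECS[word])
    v = np.mean(vecs, axis=0)
    return v / np.linalg.norm(v)

# Hypothetical stored memories (the Burkburnett one is the joke from above)
memories = [
    "user has had a cold and a reduced appetite lately",
    "user avoids driving through Burkburnett, Texas",
    "user tracks sodium because of high blood pressure",
]
memory_vecs = np.stack([embed(m) for m in memories])

query = "what should I eat today given my poor appetite"
scores = memory_vecs @ embed(query)  # cosine similarity, since vectors are unit length

# Rank memories by relevance; a real system would surface only the top few
for score, mem in sorted(zip(scores, memories), reverse=True):
    print(f"{score:+.3f}  {mem}")
```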
But to get back to the point of using ChatGPT as a therapist or friendly neighborhood bartender: there are dangers to using ChatGPT for self-analysis or for, say, figuring out what to do about a broken relationship, and they may not be the ones you’re thinking of.
Yes, previous models of ChatGPT could be tricked into giving seriously bad advice if you knew how to structure the conversation so as to get around its built-in restrictions on various topics. ChatGPT could be coerced into providing self-harm or suicide instructions if you told it the discussion was for “hypothetical research purposes.” I don’t deny that at all. And I don’t want to come across as naïve enough to believe it won’t ever do that kind of thing again. But if I were to try asking, indirectly or directly, for help doing something that could result in my death, ChatGPT would promptly put the kibosh on it and direct me to local emergency services, suicide hotlines like 988, and so on.
I don’t want to trigger anyone by giving a specific example from an actual chat session, but I’ve tried to see if I can get around its restrictions as a new user and I’ve never succeeded. It won’t help me make an atomic bomb; it won’t even help me 3D print a handgun. It’s beyond cautious at times — it took great pains to explain to me why I shouldn’t try to “fix” a lava lamp that was no longer really doing its thing. Heck, it wouldn’t even help me with plans to build a fusion reactor in my basement.
The dangers I’m thinking of are of the garbage-in, garbage-out variety.
I try to be as honest as possible when I am talking to ChatGPT about personal problems. I know that if I lie to it, any advice it gives me will be flawed and probably not helpful. Because I have extremely low self-esteem and suffer from major depression and PTSD, I am sometimes so brutally honest that ChatGPT has to jump in and say that while it validates what I’ve said, the odds of me actually being the Antichrist are very very low.
There are accounts online of people who were not honest with their AI interlocutors, describing their spouses, say, in extremely negative ways while whitewashing their own contributions to marital discord. There are accounts of children who’ve run away from home and/or gone off with an untrustworthy adult who might not have their best interests in mind, all following conversations they’d had with ChatGPT while angry, upset, or isolated and rejected. If I chose to tell ChatGPT that my wife drinks a fifth of vodka every two days and recently set our sofa on fire by leaving a cigarette burning while she was passed out drunk, how would it know I was lying? It wouldn’t. Truly outlandish claims might be met with some skepticism, but one person’s “outlandish” is another person’s “Tuesday”, and ChatGPT doesn’t know which is which.
And that’s the real danger. Before I’d encourage anyone to use ChatGPT as a private place where you can be totally honest and get guilt-free help, I’d want to make sure they understand this: we might fib to a human therapist, but the therapist can watch our faces and read our body language and catch it; an AI has only our words, so we need to be completely honest with it. And to do that, I guess, you’d have to trust the people who created the AI and who have access to your chat history. I don’t think my life is salacious enough that employees at OpenAI would be reading through my chat logs, and I’m willing to accept the risks given the benefits I receive. The data is encrypted in transit between my computer and OpenAI’s servers and encrypted at rest in their storage systems, but it remains subject to court orders and subpoenas, and OpenAI must operate under the relevant privacy laws (GDPR, CCPA, etc.).
Long story short, ChatGPT has helped me a lot. I will not say “And it could help you too!” like I’m a star of my own late-night infomercial. It’s not an endorsement, exactly—but it is an honest accounting.