My health anxiety means I won’t use smartwatches from Apple or Samsung. Here’s why

A few years ago, I was convinced I was going to die. And even though (spoiler alert) I didn’t, my severe health anxiety and tendency to always jump to the worst conclusions persisted. The rise of health-tracking watches like Apple’s newer Watch Series 11 and the Samsung Galaxy Watch 8, along with the new ways AI is attempting to analyze our bodies’ data and report back to us, led me to an important decision. For my peace of mind, AI and constant monitoring need to stay away from my personal health. Let me explain.
Around 2016, I had severe migraines that persisted for a few weeks. My anxiety increased significantly during this time due to the constant worry. When I finally called the UK NHS helpline and explained my various symptoms, they told me I needed to go to the nearest hospital and be seen within two hours. “Go with someone,” I distinctly remember them saying. “It’ll be quicker than sending you an ambulance.”
This call confirmed my worst fears: imminent death.
It turned out that my fears of an untimely demise were unfounded. The cause was actually severe muscle strain from hanging several heavy cameras around my neck for an entire day while photographing a friend’s wedding. But the helpline agent was simply working on the limited data I had provided. As a result, they had – probably rightly – taken a “better safe than sorry” approach and urged me to seek medical attention immediately, just in case I really was at risk.
The Apple Watch has always had a variety of heart rate tracking tools and I’ve always avoided them.
I’ve spent most of my adult life struggling with health anxiety, and episodes like this have taught me a lot about my ability to jump to the worst conclusions, even when there’s no real evidence to support them. A ringing in my ears? It must be a brain tumor. A pang in the stomach? Well, I better get my affairs in order.
I’ve learned to live with it over the years, and although I still have my ups and downs, I have a better understanding of what triggers things for me. One early lesson: never Google my symptoms. Because no matter what my symptoms were, cancer was always one of the possibilities a search would offer. Medical sites – including the NHS website – offered no comfort and usually only caused distressing panic attacks.
Unfortunately, I’ve seen a similar pattern with many health trackers. At first I liked my Apple Watch, and its ability to read my heart rate during workouts proved useful. Then I found I was checking it more and more often throughout the day. Then the doubt set in: “Why is my heart rate high when I’m just sitting? Is this normal? I’ll try again in five minutes.” When, inevitably, it was no different (or it was worse), panic naturally ensued.
I’ve used Apple Watches a few times, but I find heart rate tracking more stressful than useful.
Whether it was tracking heart rate, blood oxygen levels, or even sleep scores, I would obsess over what a “normal” range should be. Every time my data fell outside that range, I immediately assumed it meant I was about to collapse right then and there. The more data these devices provided, the more I felt I had to worry about. And the new Apple Watch Series 11 can monitor blood pressure, so now I have that to worry about too.
Of course, there’s an argument that I should only worry if the watch alerts me to a problem – and that I’m actually safer wearing one. Apple’s emotional promotional video at its September launch event, which told the stories of people who were literally saved from an untimely demise by their watches, certainly made a strong case. But I know that’s not how my mind works. Instead of letting these tools do their work in the background while I get on with my life, I will obsess over the measurements, and any deviation from the established baseline will be cause for immediate panic.
I’ve learned to keep those worries at bay and have continued to use smartwatches from time to time without them seriously affecting my mental health (though I have to actively avoid heart-related features like ECGs). But AI-based health tools scare me more.
It’s not just Apple that’s the problem here. This year, Samsung told us all the ways its new Galaxy AI tools – and Google’s Gemini AI – would supposedly help us in our daily lives. Samsung Health’s algorithms will track your heart rate as it fluctuates throughout the day, notifying you of changes. They will offer personalized insights about your diet and exercise to support cardiovascular health. You can even ask an AI agent questions related to your health.
To many this may seem like a holistic view of your health, but not to me. To me, this looks like more data being collected and waved in front of me, forcing me to acknowledge it and creating a never-ending feedback loop of obsession, worry, and, inevitably, panic. But it’s the AI issues that are the biggest red flag for me. AI tools, by their nature, must formulate “optimal” responses, typically based on information publicly available online. Asking the AI a question is really just a quick way to do a Google search, and as I’ve found, Google health queries don’t end well for me.
Samsung demonstrated different ways of using AI in its health app during the Unpacked keynote.
Much like the NHS phone operator who inadvertently made me panic about dying, an AI-powered health assistant can only provide answers based on the limited information it has about me. Asking a question about my heart health can surface all kinds of information, just like checking a health website to find out why I have a headache. But just as a headache can technically be a symptom of cancer, it’s also much more likely to be a tight muscle. Or a sign that I didn’t drink enough water. Or that I need to look away from my screen for a bit. Or that I shouldn’t have stayed up until 2 a.m. playing Yakuza: Infinite Wealth. Or a hundred other reasons, all of which are far more likely than the one I’ve already decided is definitely the culprit.
But will an AI give me the context I need to not worry and obsess? Or will it just present me with every potential result? The intention may be to provide a complete picture, but it risks fueling the “what if” spiral instead. And, much as Google’s AI Overviews infamously suggested putting glue on pizza, will an AI health tool simply scour the internet and hand me a patchwork response, complete with inaccurate inferences that could tip my anxiety into full-on panic attack territory?
Or maybe, just like the kind doctor at the hospital that day – who smiled reassuringly at the sobbing man sitting across from her, a man who had already written a farewell note to his family on his phone in the waiting room – an AI tool might be able to look at that data and simply say, “You’re okay, Andy. Stop worrying and go to sleep.”
Maybe one day it will. Perhaps health tracking tools and AI insights can offer me a much-needed dose of logic and reassurance to counter my anxiety, rather than being the cause of it. But until then, it’s not a risk I’m willing to take.