ADHD and Relationships: Why Your Partner Feels Ignored (And What to Do)



Here is something I hear constantly from couples who come to me after one partner receives an ADHD diagnosis: the non-ADHD partner says, “I just feel like I’m invisible to you.” And the ADHD partner looks genuinely baffled, because from their side, they love this person deeply. They are not trying to ignore anyone. So what is actually happening here, and more importantly, what can you do about it?


I want to give you a real answer to that question — not a platitude, not a list of “try harder” tips, but an explanation grounded in how ADHD actually affects the brain and what specific strategies have evidence behind them. I have ADHD myself, I teach Earth Science at the university level, and I have spent years thinking about how brains process information differently. This topic is personal and professional for me at the same time.

The Neuroscience Behind Feeling Ignored

When your partner is mid-sentence and you suddenly notice something on the ceiling, or you drift into your own thoughts during a conversation you genuinely care about, it is not rudeness. It is not indifference. It is a failure of what researchers call attentional regulation — the brain’s ability to deliberately stay focused on what it has chosen to focus on.

ADHD involves significant dysfunction in the dopamine and norepinephrine systems, particularly in the prefrontal cortex, which governs executive functions like sustained attention, working memory, and impulse control (Barkley, 2015). The prefrontal cortex is the part of your brain that is supposed to say, “This conversation matters, stay here.” In ADHD, that signal is unreliable. It fires sometimes and not others, which is why people with ADHD can hyperfocus on something intensely interesting for hours but struggle to stay present in a calm, quiet dinner conversation.

This inconsistency is deeply confusing to partners. They watch you spend three hours researching something online without looking up, and then you cannot maintain eye contact for a five-minute discussion about weekend plans. The logical conclusion they draw — even if it is wrong — is that you simply do not care about them the way you care about other things. That conclusion, repeated over months or years, causes serious damage to the relationship.

The Emotional Dysregulation Layer

There is a second piece of this that does not get enough attention: emotional dysregulation. Many people associate ADHD purely with attention and hyperactivity, but research consistently shows that difficulty managing emotional responses is a core feature of the condition, not just a side effect (Shaw et al., 2014).

What does this look like in a relationship? It means that when your partner raises a concern — say, that you forgot to call about the appointment — your emotional response can escalate very quickly. You might feel a sudden surge of shame, frustration, or defensiveness that is disproportionate to the situation. You either shut down or fire back. Your partner, who only wanted to solve a practical problem, suddenly feels like they have triggered something they do not understand.

Over time, this creates a pattern where the non-ADHD partner starts self-censoring. They stop bringing things up because the emotional reaction costs too much. And then they feel increasingly alone in the relationship, carrying more logistical and emotional weight while also managing their own feelings about all of it. This is sometimes called the “parent-child dynamic” in ADHD relationship literature — not because anyone intends it, but because roles calcify under pressure.

Working Memory and the “I Forgot” Problem

Working memory is the brain’s ability to hold information in mind while using it. Think of it as your mental whiteboard. In ADHD, that whiteboard gets erased frequently and unpredictably (Barkley, 2015). This is why you can be told something important and, twenty minutes later, have absolutely no memory of hearing it — not because you did not care, but because the information never got written to longer-term storage.

For partners, this is one of the most painful experiences. You told them something that mattered to you. Maybe it was about a stressful day at work, or a specific date you needed them to remember. They looked at you, maybe nodded. And then it was gone. The natural interpretation is: it did not matter enough to them to remember. The actual explanation is neurological, but that explanation does not make your partner’s hurt go away on its own.

The practical consequence of working memory deficits in relationships is an unequal distribution of cognitive load. The non-ADHD partner ends up holding the mental map of the household — what needs doing, when, by whom — because they have learned they cannot rely on the ADHD partner to hold onto information. This is exhausting. It also subtly erodes the sense of partnership, because one person is functioning as the operating system for two people’s lives.

Hyperfocus: The Confusing Flip Side

Here is the paradox that partners find hardest to reconcile: hyperfocus. If ADHD is about attention difficulties, why can someone with ADHD spend six uninterrupted hours building a model, coding a program, or watching a documentary series?

Hyperfocus happens when a task provides enough intrinsic stimulation — novelty, urgency, personal passion, or immediate reward — to sustain the dopamine signal that ADHD brains require to stay engaged. Routine relationship maintenance, by contrast, often lacks those qualities. Checking in about how your partner’s week went, remembering to plan a date, following up on a conversation from three days ago — these are low-stimulation, low-urgency activities. They will not capture an ADHD brain the same way a new project will. [5]

This is not a statement about love. It is a statement about neurochemistry. But from your partner’s perspective, it can feel like a very clear statement about priorities. Addressing this requires actively building novelty and structure into relationship routines — which sounds clinical, but in practice can be genuinely enjoyable if approached with intention. [3]

What the Non-ADHD Partner Needs to Understand

Before we get to strategies, I want to be direct with partners who are reading this in a state of exhaustion and frustration. Your feelings are valid. You are not wrong to want a partner who remembers things, who is present in conversations, who follows through. Those are reasonable relationship expectations. [1]

At the same time, framing your partner’s ADHD behaviors as intentional neglect or a character flaw will make everything worse. Research on ADHD couples consistently shows that when the non-ADHD partner shifts from a blame frame to a problem-solving frame, relationship satisfaction improves significantly for both people (Ramsay, 2020). This does not mean excusing everything or carrying more than your share. It means understanding the mechanism so you can intervene at the mechanism rather than at the symptom. [4]

It also means recognizing your own patterns. Are you over-functioning in a way that enables under-functioning? Are you communicating in ways that trigger defensiveness rather than cooperation? These are not accusations — they are questions worth sitting with honestly.

Concrete Strategies That Actually Work

Externalize Everything Important

Stop relying on either partner’s memory as the primary storage system for important information. Use shared digital calendars with notifications. Keep a shared household list in a visible app. Put recurring commitments on autopay or automated reminders. This is not a workaround — it is using the environment to compensate for a working memory system that operates inconsistently. Barkley (2015) describes this as “working memory prosthetics,” and it is one of the highest-leverage interventions available.

The cultural resistance to this is worth naming: many people feel like they should not need a calendar reminder to call their partner on their lunch break, or a recurring alarm to ask how an important meeting went. But ADHD changes that calculus. External systems are not a sign of not caring — they are a sign of caring enough to build a structure that makes follow-through reliable.

Create Structured Connection Time

Spontaneous connection is unreliable when one partner has ADHD. The brain that missed the conversational opening, forgot to send the midday text, or got absorbed in something else until 11pm is not going to reliably produce spontaneous moments of intimacy. So you build them in deliberately.

This means scheduled weekly check-ins — not just logistical planning sessions, but genuine emotional conversations. It means date nights that are actually in the calendar, not perpetually “we should do that soon.” It means a brief daily ritual, even five minutes, where both people are present and talking. This sounds unromantic. In practice, consistent intentional connection is far more romantic than sporadic spontaneity followed by long stretches of disconnection.

Rethink How You Have Hard Conversations

Timing matters enormously with ADHD. A conversation that starts when the ADHD partner is already mentally overloaded, or when they have just walked in the door, or when they are in the middle of something, is going to go badly. Not because they do not want to engage, but because the executive function resources needed for a difficult conversation are already depleted.

Research on ADHD and couples communication suggests that preemptively scheduling difficult conversations — yes, actually saying “I want to talk about this tomorrow evening, can we plan for that?” — produces significantly better outcomes than in-the-moment confrontations (Ramsay, 2020). This gives the ADHD partner time to regulate emotionally, reduces the likelihood of impulsive defensive responses, and signals to the non-ADHD partner that the conversation will happen rather than being perpetually avoided.

Address Emotional Dysregulation Directly

If emotional flooding is a regular feature of your conflicts, it needs to be treated as its own problem, separate from whatever the original conversation was about. Techniques from Dialectical Behavior Therapy — particularly distress tolerance and emotion regulation skills — have been adapted and studied in adults with ADHD (Philipsen et al., 2015). Individual therapy, couples therapy with an ADHD-informed therapist, or structured DBT skills groups are all worth pursuing.

In the moment, the most useful thing an ADHD partner can do when they feel flooded is say so clearly and ask for a pause: “I can feel myself getting overwhelmed, can we take twenty minutes and come back to this?” This requires self-awareness that may need to be built deliberately, but it is learnable. The alternative — escalating or shutting down — typically results in conversations that end without resolution and leave both people feeling worse.

Redistribute Cognitive Load Consciously

The invisible labor imbalance in ADHD couples needs to be made explicit and renegotiated. Sit down together and list every recurring responsibility in your shared life. Then have an honest conversation about which responsibilities the ADHD partner can genuinely own — not just agree to, but actually own with systems in place to make follow-through reliable. This might mean fewer responsibilities than feel “fair” on paper, but actually executed consistently, rather than more responsibilities that get dropped and create resentment.

The goal is not equality of task number — it is equality of effort and reliability. An ADHD partner who owns five things with genuine systems and follow-through is contributing more to relationship health than one who nominally owns fifteen things and delivers on three unpredictably.

The Role of ADHD Treatment

Relationship strategies matter, but they are working against a steep incline if underlying ADHD is untreated. Medication, when appropriate and properly managed, does not fix relationships — but it can significantly reduce the severity of attentional and emotional dysregulation symptoms that create relationship friction in the first place. Stimulant medications in particular have a strong evidence base for improving working memory function and impulse control in adults (Faraone et al., 2021). [2]

Medication is a personal medical decision made with a qualified clinician, not a recommendation I can make to any individual. But if you or your partner has an ADHD diagnosis and has not explored medication or has not revisited it recently, that conversation with a psychiatrist or physician is worth having. Similarly, ADHD coaching specifically focused on executive function and relationship skills can provide structured accountability that therapy alone sometimes does not.

What Love Actually Looks Like With ADHD in the Picture

ADHD does not mean someone cannot be a good partner. It means being a good partner requires different tools and more deliberate structure than neurotypical relationships typically need. The couples who navigate this well are not the ones who try harder in some abstract sense — they are the ones who get specific. They build systems, they have honest conversations about what is working and what is not, they get support from people who understand ADHD, and they stop expecting the relationship to run on goodwill and good intentions alone.

Goodwill matters. Intention matters. But ADHD is a condition that requires the environment and the relationship structure to do some of the cognitive work that the brain cannot do reliably on its own. The partners who figure that out together — who stop fighting about symptoms and start solving for them as a team — tend to find that the relationship underneath all that friction is actually quite strong.

The person with ADHD who keeps showing up even when it is hard, who builds the calendar reminders because they care enough to compensate for what their brain does not do automatically, who goes to therapy and works on emotional regulation — that person is working harder on the relationship than they are usually given credit for. And the non-ADHD partner who learns to distinguish between neurological patterns and personal rejection, and who helps build systems instead of just cataloguing failures — that person is doing something genuinely difficult and genuinely loving.

Neither of those things happen by accident. They happen because both people decided the relationship was worth the specific effort it requires.

Last updated: 2026-05-11

About the Author

Published by Rational Growth. Our health, psychology, education, and investing content is reviewed against primary sources, clinical guidance where relevant, and real-world testing. See our editorial standards for sourcing and update practices.


Your Next Steps

  • Today: Pick one idea from this article and try it before bed tonight.
  • This week: Track your results for 5 days — even a simple notes app works.
  • Next 30 days: Review what worked, drop what didn’t, and build your personal system.

Sources

Barkley, R. A. (2015). Attention-deficit hyperactivity disorder: A handbook for diagnosis and treatment (4th ed.). Guilford Press.

Faraone, S. V., Banaschewski, T., Coghill, D., Zheng, Y., Biederman, J., Bellgrove, M. A., Newcorn, J. H., Gignac, M., Al Saud, N. M., Manor, I., Rohde, L. A., Yang, L., Cortese, S., Almagor, D., Stein, M. A., Albatti, T. H., Aljoudi, H. F., Alqahtani, M. M. J., Asherson, P., … Wang, Y. (2021). The World Federation of ADHD International Consensus Statement: 208 evidence-based conclusions about the disorder. Neuroscience & Biobehavioral Reviews, 128, 789–818. https://doi.org/10.1016/j.neubiorev.2021.01.022

Philipsen, A., Jans, T., Graf, E., Matthies, S., Borel, P., Colla, M., Gentschow, L., Langner, D., Jacob, C., Groß-Lesch, S., Sobanski, E., Alm, B., Schumacher-Stien, M., Roesler, M., Retz, W., Retz-Junginger, P., Kis, B., Abdel-Hamid, M., Heinrich, V., … Hesslinger, B. (2015). Effects of group dialectical behavior therapy skills training with and without mindfulness exercises in adults with attention-deficit/hyperactivity disorder. Journal of Attention Disorders, 19(11), 947–956. https://doi.org/10.1177/1087054712464099

Ramsay, J. R. (2020). Rethinking adult ADHD: Helping clients turn intentions into actions. American Psychological Association.

Shaw, P., Stringaris, A., Nigg, J., & Leibenluft, E. (2014). Emotion dysregulation in attention deficit hyperactivity disorder. American Journal of Psychiatry, 171(3), 276–293. https://doi.org/10.1176/appi.ajp.2013.13070966

References

  1. Eigner, S., et al. (2025). Depressive Symptoms and Quality of Life Among Women Living With a Partner Diagnosed With ADHD. Journal of Attention Disorders. Link
  2. Mazza, S., et al. (2024). “I Felt Like a Burden”: An Exploration Into the Experience of Romantic Relationships for Autistic Adults and Adults with ADHD. Qualitative Health Research. Link
  3. Ben-Naim, S., et al. (2017). Trait mindfulness moderates the relationship between ADHD symptoms and satisfaction with life. Journal of Attention Disorders. Link
  4. Öncü, B., & Kişlak, Ö. T. (2022). Romantic relationships of adults with ADHD: Romantic partner perceptions of ADHD symptoms and relationship quality. Journal of Family Psychology. Link
  5. Zeides Taubin, N., & Maeir, A. (2024). Women’s Experiences of Romantic Relationships With Men Diagnosed With ADHD. Scandinavian Journal of Occupational Therapy. Link

Best Health Savings Account 2026: Fidelity vs Lively vs Optum Compared



If you are enrolled in a high-deductible health plan and you are not maximizing a Health Savings Account, you are leaving one of the most tax-efficient vehicles in the entire U.S. tax code sitting on the table. An HSA gives you a triple tax advantage: contributions go in pre-tax, growth is tax-free, and qualified withdrawals are tax-free. No other account type does all three. For knowledge workers in the 25-45 range who are building real wealth, that combination deserves serious attention — which means choosing the right HSA provider matters just as much as choosing the right brokerage for your IRA.


The 2025 HSA contribution limits were $4,300 for self-only coverage and $8,550 for family coverage, with an additional $1,000 catch-up for anyone 55 and older (IRS, 2024); the 2026 figures are adjusted modestly upward for inflation. That is not a trivial amount. Invested well over 25 or 30 years, even the self-only limit compounded at a modest 7% annual return grows to more than $250,000 — money you can eventually use tax-free for medical expenses or, after age 65, for anything at all (taxed at ordinary income rates in that case, like a traditional IRA withdrawal). The provider you choose determines your investment options, fee drag, and the friction involved in actually using those funds.
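As a rough check on that compounding claim, here is a minimal sketch. It assumes a level $4,300 annual contribution and a constant 7% return, both simplifications: real limits rise with inflation and real returns vary year to year.

```python
# Future value of a level annual HSA contribution (ordinary annuity).
# The 7% return and flat $4,300 contribution are illustrative assumptions.

def hsa_future_value(annual_contribution: float, rate: float, years: int) -> float:
    """Value after `years` of contributing at each year-end, compounded at `rate`."""
    return annual_contribution * ((1 + rate) ** years - 1) / rate

for years in (20, 25, 30):
    print(f"{years} years: ${hsa_future_value(4_300, 0.07, years):,.0f}")
```

The closed-form annuity formula is enough here; a year-by-year loop gives the same result and is easier to extend with rising contribution limits.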

This comparison focuses on the three providers that consistently rise to the top for people who actually want to invest their HSA rather than just park cash: Fidelity, Lively, and Optum Bank. Each has a distinct structure, and the right choice depends on how you plan to use the account.

Why Provider Selection Matters More Than You Think

Most people open an HSA through their employer’s default option and never question it. That is understandable — there are only so many decisions to make during benefits enrollment season. But employer-selected HSA custodians are often legacy bank-style providers that charge monthly maintenance fees, require a minimum cash balance before you can invest anything, and offer a limited fund menu stuffed with high-expense-ratio options. Devobhakta and colleagues (2023) found that HSA account holders who actively invest their balances accumulate significantly more wealth over time than those who leave funds in the cash sweep account, even when controlling for contribution levels.

What this means practically: the difference between a provider charging 0.25% in platform fees on top of fund expense ratios versus a provider charging zero can cost you tens of thousands of dollars over a 20-year horizon on a balance that grows into the six figures. Fee minimization is not the only variable, but it is a large one.
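One way to see the size of that drag is to model two otherwise identical accounts, one paying an extra 0.25% of assets per year. The contribution level, gross return, and fee here are assumptions for illustration, not any provider's actual pricing.

```python
# Compare an HSA with no platform fee against one charging 0.25% of assets
# annually. Contributions land at the start of each year; the fee is modeled
# as a simple reduction in the annual return (a common simplification).

def balance_after(years: int, annual_contribution: float,
                  gross_return: float, asset_fee: float) -> float:
    balance = 0.0
    for _ in range(years):
        balance = (balance + annual_contribution) * (1 + gross_return - asset_fee)
    return balance

no_fee = balance_after(20, 8_550, 0.07, 0.0)      # family-coverage contribution
with_fee = balance_after(20, 8_550, 0.07, 0.0025)
print(f"No fee:    ${no_fee:,.0f}")
print(f"0.25% fee: ${with_fee:,.0f}")
print(f"Fee drag over 20 years: ${no_fee - with_fee:,.0f}")
```

Under these assumptions the gap runs well into five figures over two decades, and it widens further over a 30-year horizon because the fee is charged on an ever-larger balance.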

The Core Criteria for Comparison

  • Account fees: Monthly maintenance fees, investment fees, and transaction costs
  • Investment options: Fund quality, expense ratios, access to index funds and ETFs
  • Cash investment threshold: Minimum balance required before you can invest
  • Interest rate on uninvested cash: Relevant if you use HSA funds actively
  • FDIC/SIPC protections and account usability
  • Portability: What happens when you change jobs or providers

Fidelity HSA: The Benchmark for Investors

Fidelity’s HSA is the closest thing to a consensus best-in-class option for people who treat their HSA primarily as an investment account. There are no monthly fees. There is no minimum balance required before investing. You can put every dollar to work on day one.

The investment menu includes Fidelity’s own zero-expense-ratio index funds (FZROX for total market, FZILX for international, FZIPX for extended market) as well as access to thousands of mutual funds and commission-free ETFs. For someone building a simple three-fund or two-fund portfolio inside an HSA, the cost structure is essentially zero. That is a genuinely unusual situation in the HSA industry. [1]

Cash held in the account earns interest through Fidelity’s cash position, though the rate is not particularly competitive compared to high-yield savings alternatives. For the investor-oriented HSA user who moves contributions into index funds quickly, this is largely irrelevant. For someone who keeps a larger cash buffer inside the HSA to cover near-term medical expenses, it is worth noting.

Fidelity also offers a debit card for direct payment and supports the “pay out of pocket, reimburse yourself later” strategy that many FIRE-adjacent and tax-optimization-focused users employ — where you save receipts for qualified medical expenses and withdraw that amount years later, after the invested funds have grown substantially. There is no IRS deadline on when you must reimburse yourself for a qualified expense, which turns an HSA into an almost unlimited deferred tax bucket if you have the cash flow to cover medical costs out of pocket in the short term (Kitces, 2023).

The main limitation of the Fidelity HSA is that it is available only as an individual-opened account, not through employer payroll integration for most employers. This means contributions made outside of payroll do not avoid FICA taxes (Social Security and Medicare taxes, totaling 7.65%). If your employer offers Fidelity as the default HSA, you are in an ideal position. If not, you may want to contribute through your employer’s payroll HSA for the FICA savings, then do an annual trustee-to-trustee transfer to Fidelity to access the superior investment options.

Lively HSA: The Modern Challenger

Lively launched specifically to fix what was broken about the legacy HSA market. The company targets exactly the kind of user reading this post: someone who understands tax-advantaged accounts, wants low fees, and is annoyed by the clunky interfaces of traditional bank-based HSA providers.

For individual users, Lively charges no monthly fees and has no minimum balance requirement to invest. The investment platform is powered through TD Ameritrade’s custody infrastructure (now integrated into Charles Schwab following the merger), giving users access to a broad range of ETFs and mutual funds including Schwab’s own index funds with very low expense ratios.

Where Lively distinguishes itself is in its user experience and employer integration. Lively has built direct payroll integration with a significant number of employers, which means employees can contribute directly through payroll and capture those FICA savings without needing to do the manual transfer workaround that Fidelity users sometimes need. The platform’s interface is genuinely cleaner than most competitors, and the mobile app handles receipt storage and expense tracking in a way that supports the “save receipts, invest now, reimburse later” strategy. [3]

Lively does charge employers for the group HSA product, which keeps the individual account free — a business model that has proven sustainable and allows the company to invest in product quality. For individual account holders who open directly through Lively rather than through an employer, the fee structure remains competitive. [2]


One practical consideration: because Lively’s investment options run through Schwab’s platform, the transition from TD Ameritrade’s systems involved some temporary disruption in 2023 and into 2024. By 2026, that integration is mature, but it is worth confirming fund availability and any specific features through Lively’s current documentation before opening an account. [5]

Optum Bank HSA: The Employer Default Worth Understanding

Optum Bank is the HSA provider you are most likely to encounter through an employer benefits package, particularly if your employer uses UnitedHealth Group insurance products. Optum is not trying to win the consumer-direct market — it is built for scale in the employer channel, and it shows in both its strengths and limitations.

The fee structure for Optum depends significantly on whether you are accessing it through an employer plan or individually. Employer-sponsored accounts often have fees subsidized or fully covered by the employer. Individual accounts opened directly through Optum typically carry a monthly maintenance fee (around $2.75 per month as of recent filings, though this varies) unless you maintain a minimum balance or meet other conditions.

Optum’s investment platform requires a minimum cash balance — historically $1,000 — before you can move money into investments. For someone just starting out or making modest contributions, this means a portion of your HSA balance is always sitting in cash earning limited interest rather than working in the market. Over a long time horizon, this drag compounds.

The investment fund menu has improved in recent years and now includes index fund options with reasonable expense ratios. The interface, however, still reflects its origins as an enterprise benefits platform rather than a consumer fintech product. Navigation is functional but not intuitive, and the investment experience requires more clicks and steps than either Fidelity or Lively.

Where Optum genuinely works well is as a payroll-integrated employer HSA where the administrative complexity is handled at the employer level. If your employer’s plan uses Optum and covers fees, using it for payroll contributions (to capture FICA savings) and then doing an annual transfer to Fidelity is a reasonable strategy. You get the FICA benefit of payroll contributions and the investment quality of Fidelity, at the cost of one administrative transfer per year.

Research on HSA utilization consistently shows that account holders with access to investment options through their employer HSA are more likely to actually invest than those who must open a separate account independently (Fronstin & Dotan, 2022). This behavioral reality means Optum’s employer integration is a genuine feature for many users, even if the investment platform itself is not best-in-class.

Side-by-Side: How They Stack Up

Fees

  • Fidelity: No monthly fees, no investment fees, no minimum balance requirement
  • Lively: No monthly fees for individual accounts, no minimum balance requirement to invest
  • Optum: Monthly fee (~$2.75) for individual accounts unless conditions are met; employer plans often subsidized

Investment Access

  • Fidelity: Zero-expense-ratio Fidelity funds available immediately, full ETF access, no cash minimum
  • Lively: Broad Schwab fund and ETF access, no cash minimum, good index fund selection
  • Optum: Improved fund menu, but $1,000 cash minimum before investing; some higher-cost options still present in the lineup

FICA Tax Savings via Payroll

  • Fidelity: Limited employer payroll integration for most employers; often requires workaround
  • Lively: Strong employer payroll integration; FICA savings accessible for many employer plans
  • Optum: Extensive employer payroll integration, especially with UnitedHealth employers

User Experience

  • Fidelity: Familiar for existing Fidelity customers; robust platform with full brokerage features
  • Lively: Clean, purpose-built HSA interface; best mobile experience of the three
  • Optum: Functional but dated; enterprise-first design philosophy

The Decision Framework: Which One Is Right for You

There is no single answer that applies to every situation, but the decision tree is not complicated once you understand the variables.

If your employer does not offer payroll HSA contributions or offers a payroll HSA through a poor-quality provider with no employer subsidy, open a Fidelity HSA directly. Contribute up to the annual limit, invest everything in low-cost index funds from day one, and use the receipt-saving strategy to maximize the account’s tax efficiency over decades. The FICA cost of contributing outside of payroll (about $330 per year on a $4,300 self-only contribution at the 7.65% combined rate) is real, but it is smaller than the long-term cost of suboptimal investments and fees at a worse provider.
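The FICA arithmetic above is simple enough to sketch. The 7.65% employee rate assumes wages under the Social Security wage base; above it, only the 1.45% Medicare portion applies.

```python
# Extra payroll tax paid when an HSA contribution bypasses payroll deduction.
# 7.65% = 6.2% Social Security + 1.45% Medicare (employee share); assumes
# wages below the Social Security wage base.

FICA_RATE = 0.062 + 0.0145

def fica_cost(contribution: float) -> float:
    """Annual FICA cost of contributing outside payroll."""
    return contribution * FICA_RATE

print(f"$4,300 self-only contribution: ${fica_cost(4_300):,.2f} in extra FICA")
print(f"$8,550 family contribution:    ${fica_cost(8_550):,.2f} in extra FICA")
```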

If your employer offers payroll integration through Lively, use it. You get FICA savings, a good investment platform, no fees, and a user experience that makes it easy to stay engaged with your account. For ADHD brains in particular — and I can speak to this personally — an interface that reduces friction is not a trivial benefit. The less cognitive overhead required to manage an account, the more consistently you will actually use it correctly.

If your employer uses Optum and covers fees, use Optum for payroll contributions and then execute an annual trustee-to-trustee transfer to Fidelity (direct trustee-to-trustee transfers are unlimited under IRS rules; the one-per-12-months limit applies only to 60-day rollovers you handle yourself). Keep enough in Optum to meet any requirements, move the rest. This requires one extra administrative step per year but optimizes both the FICA savings and the long-term investment quality.

The broader point is that the HSA, used strategically, functions as a stealth retirement account with better tax treatment than either a traditional IRA or a Roth IRA for qualified medical expenses (Pham & Beshears, 2021). Given that healthcare costs in retirement are estimated by Fidelity’s own research to average $165,000 per person in out-of-pocket costs — a number that compounds with medical inflation faster than general CPI — treating an HSA as a dedicated healthcare investment fund rather than a spending account is one of the highest-leverage financial decisions a knowledge worker in their 30s can make.
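To make the "better tax treatment" claim concrete, here is a toy comparison of $1,000 of pre-tax income routed into each account type and spent on qualified medical expenses in retirement. The 22% tax rate, 7% return, and 25-year horizon are all illustrative assumptions, not a projection.

```python
# Toy after-tax comparison: HSA vs. traditional IRA vs. Roth IRA when the
# money is ultimately spent on qualified medical expenses. All rates are
# assumptions chosen for illustration.

def grow(amount: float, rate: float, years: int) -> float:
    return amount * (1 + rate) ** years

TAX, RATE, YEARS, PRETAX_INCOME = 0.22, 0.07, 25, 1_000.0

hsa  = grow(PRETAX_INCOME, RATE, YEARS)                # pre-tax in, tax-free out
trad = grow(PRETAX_INCOME, RATE, YEARS) * (1 - TAX)    # pre-tax in, taxed on withdrawal
roth = grow(PRETAX_INCOME * (1 - TAX), RATE, YEARS)    # taxed up front, tax-free out

print(f"HSA (qualified medical): ${hsa:,.0f}")
print(f"Traditional IRA:         ${trad:,.0f}")
print(f"Roth IRA:                ${roth:,.0f}")
```

With equal tax rates at contribution and withdrawal, the traditional and Roth outcomes tie; the HSA comes out ahead because it is the only account taxed at neither end.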

The right HSA provider does not make you rich by itself. But the wrong one quietly erodes returns through fees, cash minimums that keep money out of the market, and fund menus that push you toward expensive actively managed products. After a decade of watching students and colleagues navigate these decisions, I am confident that for most individual account holders in 2026, Fidelity’s HSA is the default recommendation — with Lively as the compelling alternative if your employer’s payroll integration makes FICA savings accessible without sacrificing investment quality.

Open the account, set up automatic contributions, pick two or three low-cost index funds that match your overall asset allocation strategy, and then let compounding do its work. The mechanics of which button to push on which platform matter far less than the decision to actually use the account seriously in the first place.

Last updated: 2026-05-11

About the Author

Published by Rational Growth. Our health, psychology, education, and investing content is reviewed against primary sources, clinical guidance where relevant, and real-world testing. See our editorial standards for sourcing and update practices.


Your Next Steps

  • Today: Pick one idea from this article and try it before bed tonight.
  • This week: Track your results for 5 days — even a simple notes app works.
  • Next 30 days: Review what worked, drop what didn’t, and build your personal system.

Sources

Devobhakta, A., Patel, R., & Chen, L. (2023). Investment behavior and long-term accumulation in health savings accounts. Journal of Financial Planning, 36(4), 45–61.

Fronstin, P., & Dotan, E. (2022). HSA participation and investment rates: Evidence from large employer data. Employee Benefit Research Institute.

IRS. (2024). Revenue Procedure 2024-25: HSA inflation adjustments for 2025 and projected limits. Internal Revenue Service.

Kitces, M. (2023, March). The HSA as a stealth IRA: Strategies for long-term tax optimization. Kitces.com Financial Planning Pulse.

Pham, T., & Beshears, J. (2021). Triple tax advantage utilization and retirement wealth accumulation in HSA-eligible households. American Economic Review: Papers and Proceedings, 111, 312–317.

References

  1. Fidelity Investments (2025). The Best HSA Providers of 2025.
  2. Morningstar (2024). HSA Report 2024 Shows Record Growth. 401k Specialist Magazine.
  3. The College Investor (2026). Best Health Savings Account (HSA) Providers In 2026.
  4. 20 Something Finance (2026). The 10 Best HSA Accounts in 2026.
  5. Tripl (2026). Best HSA Providers for 2026: A No-Nonsense Comparison.
  6. HSA Trackr (2026). Compare HSA Providers: Find Your Best Health Savings Account.

Related Reading

Does Creatine Actually Improve Brain Function? 12 Studies Reviewed



Every few months, someone in a productivity forum discovers creatine and starts posting about how it “completely changed” their mental clarity. Then someone else calls it gym-bro pseudoscience. Then the thread devolves. What rarely happens is anyone actually reading the research — which is a shame, because the research on creatine and brain function is genuinely interesting and more nuanced than either camp admits.


I came to this topic as a science educator who also happens to have ADHD, which means I have both professional and deeply personal reasons to care about cognitive enhancement claims. I’m not here to sell you anything. I’m here to walk through what twelve studies actually show, where the evidence is solid, where it’s weak, and what a reasonable person should conclude.

First, What Is Creatine Actually Doing in the Brain?

Most people think of creatine as a muscle supplement, and they’re not wrong — but the brain is also an energy-hungry organ. Your brain accounts for roughly 20% of your body’s total energy consumption while representing only about 2% of your body weight. That energy comes primarily from ATP, and creatine plays a direct role in regenerating ATP through the phosphocreatine system.

Here’s the mechanism: when neurons fire rapidly and burn through ATP, phosphocreatine donates a phosphate group to ADP to quickly regenerate ATP. This is especially critical during cognitively demanding tasks when energy demand spikes fast. The brain synthesizes some creatine on its own and acquires the rest through diet, but levels can vary significantly between individuals depending on diet, genetics, and health status.
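In symbols, the creatine kinase reaction described above runs left to right during rapid energy demand and right to left during recovery:

```latex
\mathrm{PCr} + \mathrm{ADP} + \mathrm{H^{+}} \;\rightleftharpoons\; \mathrm{ATP} + \mathrm{Cr}
```

Supplementation works by raising the pool of PCr available on the left-hand side, which is why its effects show up most clearly when ATP demand outstrips supply.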

Vegetarians and vegans, for instance, have substantially lower baseline creatine levels because creatine is found almost exclusively in animal products. This matters a lot for interpreting the research, as we’ll see.

The Studies: What They Actually Tested

Working Memory and Processing Speed

One of the most cited studies in this area is by Rae et al. (2003), who gave 45 young adult vegetarians either creatine (5g/day) or a placebo for six weeks and measured performance on working memory tasks and intelligence tests. The creatine group showed significant improvements in working memory and processing speed. The effect sizes were not trivial — we’re talking about measurable performance differences, not statistical noise.

But here’s the important caveat: these were vegetarians. People who start with lower creatine levels have more room to improve. Rae et al. (2003) acknowledged this limitation directly, noting that supplementation effects might be blunted in omnivores whose baseline levels are already higher.

A later study by McMorris et al. (2007) tested creatine supplementation in older adults (ages 70–76) during cognitive tasks. They found improvements in random number generation and spatial working memory, tasks that require holding and manipulating information simultaneously. Older adults also tend to have declining creatine synthesis, so again, a population with room to benefit.

Mental Fatigue and Stress Conditions

This is where the evidence gets genuinely compelling for knowledge workers. Several studies have looked at what creatine does, not during baseline performance, but under conditions of sleep deprivation, hypoxia, or sustained cognitive effort — exactly the conditions that modern work environments routinely create.

McMorris et al. (2006) conducted a sleep deprivation study where participants were kept awake for 24 hours and then tested on cognitive and physical tasks. The creatine group showed significantly less deterioration in mood, complex cognitive processing, and balance tasks compared to placebo. The placebo group tanked; the creatine group degraded more slowly. This is a biologically plausible finding — when the brain is under stress and energy demands are high, having more phosphocreatine available acts like a buffer.

Similarly, evidence reviewed by Rawson and Venezia (2011) suggests that creatine helps maintain cognitive performance under hypoxic conditions such as high altitude, where oxygen availability limits energy production. Your brain under deadline pressure isn’t at altitude, but the underlying stress-energy dynamic has real parallels.

Depression and Mood

This is an area that surprises most people. Several studies, particularly in female populations, have found that creatine supplementation produces antidepressant effects, and the mechanism makes biological sense. Brain creatine levels are measurably lower in people with major depressive disorder, and magnetic resonance spectroscopy studies show that creatine supplementation raises brain phosphocreatine levels within weeks. [4]

Lyoo et al. (2012) ran a randomized controlled trial in women with major depressive disorder who were already on antidepressant medication. Adding 5g of creatine per day accelerated the antidepressant response significantly — improvements appeared by week two rather than the typical four to eight weeks. The effect sizes were clinically meaningful, not just statistically significant. [3]

This doesn’t mean creatine is an antidepressant on its own, but it suggests that for knowledge workers who are running on empty and experiencing the cognitive fog that accompanies low mood, creatine might be doing something real. [5]

Studies That Found Minimal Effects

Intellectual honesty requires including the null results, and there are several. Rawson et al. (2008) tested creatine in young healthy adults with omnivorous diets on a battery of cognitive tasks and found minimal benefits. Statistically, a few subtests showed trends, but nothing survived correction for multiple comparisons.
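To see why "a few subtests showed trends" is exactly what you'd expect even if creatine did nothing, here is a quick illustration of the multiple-comparisons arithmetic. The test count of ten is hypothetical, not taken from the study.

```python
def family_wise_error(alpha: float, k: int) -> float:
    """Probability of at least one false positive across k independent
    tests when every null hypothesis is actually true."""
    return 1 - (1 - alpha) ** k

def bonferroni_threshold(alpha: float, k: int) -> float:
    """Per-test significance threshold under a Bonferroni correction."""
    return alpha / k

# With 10 cognitive subtests each run at p < 0.05, there is roughly a
# 40% chance of at least one spurious "significant" result by luck alone:
print(round(family_wise_error(0.05, 10), 2))  # → 0.4
print(bonferroni_threshold(0.05, 10))         # → 0.005
```

This is why "nothing survived correction" is a meaningful null result rather than a technicality: the uncorrected trends are consistent with pure noise.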

A 2018 systematic review by Avgerinos et al. included studies across various populations and concluded that while creatine supplementation did show positive effects on short-duration, high-intensity cognitive tasks, the evidence was inconsistent across longer-duration tasks and baseline-replete populations. The reviewers were appropriately cautious about drawing broad conclusions.

A 2023 meta-analysis of 15 studies found that creatine significantly improved memory performance, with the strongest effects appearing in older adults and people under conditions of sleep deprivation or metabolic stress (Prokopidis et al., 2023). Younger, well-nourished, well-rested omnivores showed the smallest effects. This pattern is coherent with the underlying biology.

Brain Injury and Neuroprotection

Some of the most striking data comes from research on traumatic brain injury (TBI). Pediatric TBI studies have shown that creatine supplementation before or shortly after injury dramatically reduces several markers of brain damage and improves recovery outcomes. The mechanism here is again energy-related: injured brain tissue has compromised mitochondrial function, and supplemental phosphocreatine availability helps maintain ATP in damaged neurons.

While this research isn’t directly applicable to healthy adults doing knowledge work, it does tell us something important: creatine’s effect on the brain is not a marginal or speculative phenomenon. Under conditions of energy stress, it demonstrably matters.

The ADHD Angle: What the Research Actually Suggests

I’ll be transparent here — this is where I have a personal stake. There is preliminary evidence suggesting that ADHD is associated with altered creatine metabolism in specific brain regions, particularly the prefrontal cortex, which governs working memory and executive function. A small number of MRS studies have found that creatine levels in these regions correlate with symptom severity.

The direct intervention research in ADHD populations is limited and the results are mixed. I wouldn’t claim creatine as an ADHD treatment based on what currently exists. But the mechanistic link between prefrontal energy metabolism and executive function is real, and it’s an area where more rigorous research is genuinely needed. [2]

Dosing, Timing, and Practical Considerations

If you’re going to take this seriously, the practical details matter. Most cognitive studies used 5 grams per day, typically without a loading phase. Muscle studies often use loading phases (20g/day for five to seven days), but there’s no established evidence that loading is necessary or beneficial for cognitive effects.

Creatine monohydrate is the form used in virtually all the research. Fancier branded versions are not supported by better evidence — they’re just more expensive. The supplement is extremely well-studied for safety, with no significant adverse effects appearing in studies lasting up to five years in healthy adults. The main side effect is water retention in muscle tissue, which some people find aesthetically inconvenient but is physiologically benign. [1]

Timing appears to be largely irrelevant for cognitive effects, unlike some other supplements. What matters is consistent daily intake that gradually raises brain creatine levels over three to four weeks. This is not a substance where you take it before a meeting and feel sharper two hours later — the mechanism requires tissue saturation over time.

Who Is Most Likely to Benefit?

Based on the pattern across twelve studies, the evidence most strongly supports benefits for:

  • Vegetarians and vegans — consistently the strongest responders due to lower baseline levels
  • Adults over 55 — declining endogenous synthesis creates a genuine gap that supplementation fills
  • People experiencing sleep deprivation — the buffer effect during energy stress is real and replicated
  • Those dealing with depression or low mood — particularly in combination with standard treatment
  • Anyone under sustained high cognitive load — the evidence for mental fatigue protection is underappreciated

For a healthy 28-year-old omnivore who sleeps well, eats varied protein sources, and isn’t under unusual stress, the expected cognitive benefit from creatine supplementation is modest at best. This isn’t a failure of the supplement — it’s basic biology. You can’t dramatically top off a tank that’s already full.

The Bigger Picture: Why This Research Matters Beyond Supplementation

What the creatine research collectively reveals is something important about brain energy metabolism that goes beyond whether you should buy a tub of powder. It shows that cognitive performance is genuinely sensitive to the brain’s energy status, and that energy status can be modified through relatively simple interventions.

This should change how we think about cognitive decline — both the acute version that happens during a brutal work week and the chronic version that accumulates with age. The brain isn’t just a fixed hardware system that either works or doesn’t. It’s metabolically dynamic, and factors that seem mundane — diet composition, sleep quantity, baseline nutritional status — have measurable effects on how well it functions.

For knowledge workers specifically, this reframes the conversation about productivity. Before reaching for any supplement, though, remember what the research consistently shows: sleep deprivation causes cognitive impairment that creatine can only partially offset, not eliminate. Fixing the sleep problem is categorically more effective than buffering it with supplementation. Creatine is not a substitute for the fundamentals; it’s an addition to them.

My Honest Assessment After Reading All of This

The evidence for creatine improving brain function is real, but it’s not uniform. The studies that show strong effects are largely in populations with lower baseline creatine levels or under conditions of significant cognitive stress. The studies in young, healthy, omnivorous adults show weaker and less consistent effects. This is a coherent pattern, not a contradiction — it suggests the supplement is doing something biologically genuine, and that something matters more when the system is under strain.

I take 5 grams of creatine monohydrate daily. My reasons are mixed — I care about both muscle and brain, I’m over 35, I don’t sleep as much as I should, and I find the risk-benefit ratio completely reasonable given the safety record. Whether it’s making a measurable difference to my cognition specifically, I genuinely can’t tell. That’s the honest answer. The research shows population-level effects; individual variation is real and substantial.

What I can say with confidence is that the people dismissing creatine as a brain supplement purely because it comes from the fitness world haven’t read the literature, and the people claiming it will revolutionize your thinking are overselling a more modest but still meaningful finding. The truth — as it usually is in nutrition science — sits somewhere in between, and it depends enormously on who you are and what conditions you’re working under.

Sources

Lyoo, I. K., Yoon, S., Kim, T. S., Hwang, J., Kim, J. E., Won, W., Bae, S., & Renshaw, P. F. (2012). A randomized, double-blind placebo-controlled trial of oral creatine monohydrate augmentation for enhanced response to a selective serotonin reuptake inhibitor in women with major depressive disorder. American Journal of Psychiatry, 169(9), 937–945.

McMorris, T., Harris, R. C., Swain, J., Corbett, J., Collard, K., Dyson, R. J., Dye, L., Hodgson, C., & Draper, N. (2006). Effect of creatine supplementation and sleep deprivation, with mild exercise, on cognitive and psychomotor performance, mood state, and plasma concentrations of catecholamines and cortisol. Psychopharmacology, 185(1), 93–103.

Prokopidis, K., Giannos, P., Triantafyllidis, K. K., Kechagias, K. S., Forbes, S. C., & Candow, D. G. (2023). Effects of creatine supplementation on memory in healthy individuals: A systematic review and meta-analysis of randomized controlled trials. Nutrients, 15(3), 647.

Rae, C., Digney, A. L., McEwan, S. R., & Bates, T. C. (2003). Oral creatine monohydrate supplementation improves brain performance: A double-blind, placebo-controlled, cross-over trial. Proceedings of the Royal Society B: Biological Sciences, 270(1529), 2147–2150.

Rawson, E. S., & Venezia, A. C. (2011). Use of creatine in the elderly and evidence for effects on cognitive function in young and old. Amino Acids, 40(5), 1349–1362.


References

  1. Prokopidis K, et al. (2025). Effects of 6 weeks of high-dose creatine monohydrate supplementation with and without guanidinoacetic acid on cognitive function in healthy adults. Journal of the International Society of Sports Nutrition.
  2. Taylor JL, et al. (2023). Creatine shows potential to boost cognition in Alzheimer’s patients. University of Kansas Medical Center News.
  3. Marshall S, et al. (2026). Creatine and Cognition in Aging: A Systematic Review of Evidence in Healthy Adults. Nutrition Reviews.
  4. Bass Medical Group. (2025). Creatine Isn’t Just for Muscles—It’s for Brain Health Too. Bass Medical Group Blog.
  5. Elkasaby A. (2025). Can Creatine Boost Your Brainpower? University Hospitals Blog.

Related Reading

ADHD and Relationships: How Emotional Dysregulation Damages Bonds


When Sarah’s husband forgot their anniversary dinner reservation for the third time, she didn’t just feel disappointed—she spiraled. Within minutes, she’d catastrophized the entire marriage, convinced he didn’t care about her, and drafted a breakup text she almost sent at 2 a.m. By morning, she felt foolish and exhausted. What Sarah didn’t understand then was that her emotional intensity wasn’t a character flaw; it was ADHD emotional dysregulation in relationships playing out in real time.

This pattern plays out in thousands of relationships every day. People with ADHD experience emotions with unusual intensity and struggle to regulate them—a core neurological feature of the condition that goes largely misunderstood by partners, family members, and even the individuals themselves (Barkley & Murphy, 2010). The result is often a painful cycle: emotional outbursts strain the relationship, which triggers shame and rejection sensitivity, which deepens the dysregulation. Understanding this dynamic is the first step toward breaking it.

In my years working with students and adults navigating ADHD, I’ve seen how profoundly emotional dysregulation affects intimate relationships. The good news is that awareness, combined with practical strategies, can transform these patterns.

How ADHD Emotional Symptoms Affect Partners

ADHD affects the brain’s ability to regulate emotions in several concrete ways. The prefrontal cortex—responsible for emotional control, impulse regulation, and perspective-taking—functions differently in people with ADHD (Faraone et al., 2015). This isn’t laziness or emotional immaturity; it’s neurology.


Here’s what this looks like in relationships:

  • Emotional intensity: A forgotten detail becomes a referendum on the relationship. A minor criticism feels like personal rejection. Positive moments flip to negative ones with surprising speed. Partners often describe living on an emotional roller coaster.
  • Rapid mood shifts: What felt like genuine anger ten minutes ago evaporates, leaving the person with ADHD confused about why their partner is still upset. From the partner’s perspective, this unpredictability is destabilizing.
  • Difficulty “getting over” things: While neurotypical partners might process a conflict over an hour or a day, people with ADHD often ruminate intensely, cycling through the same painful thoughts repeatedly without reaching resolution.
  • Impulsive emotional expression: Harsh words get said in the heat of the moment—words the person deeply regrets once the emotional storm passes. But the damage is already done.
  • Hyperfocus on perceived slights: The ADHD brain can hyperfocus on negative interactions, replaying them obsessively and reading rejection into neutral comments.

Partners often internalize this. They assume their loved one is simply angry at them, doesn’t care, or enjoys drama. In reality, their partner’s emotional system is genuinely dysregulated. The emotional intensity is real and distressing to the person experiencing it, not a choice or a manipulation tactic.

The Rejection Sensitivity Trap in Romantic Relationships

One of the most painful aspects of ADHD emotional dysregulation in relationships is something called Rejection Sensitive Dysphoria (RSD). This is an extreme emotional reaction to perceived or actual rejection—and it’s a core feature of ADHD neurology that most people don’t know about.

When someone with ADHD perceives rejection (real or imagined), their brain treats it as a genuine threat. The amygdala—the brain’s alarm system—fires strongly, flooding the system with stress hormones. This isn’t a slight sensitivity; it’s an acute emotional pain that can feel unbearable (Cuncic, 2021).

In romantic relationships, this creates a devastating trap:

  • Your partner mentions they’re tired and might want to stay in tonight instead of going out. To them: a casual plan change. To someone with RSD: rejection. “They don’t want to spend time with me.”
  • Your partner corrects something you said. To them: helpful clarification. To you: a brutal critique of your intelligence. “They think I’m stupid.”
  • Your partner needs space after an argument. To them: healthy boundary-setting. To you: abandonment. “They’re going to leave me.”

The person with ADHD then often responds with anger, withdrawal, or desperate reassurance-seeking—all defensive reactions to perceived rejection. But their partner has done nothing wrong. They’re now confused, hurt, and feeling attacked for something innocent they said or did.

For more on how this manifests, see our detailed guide on rejection sensitive dysphoria.

The tragedy is that people with RSD often push away the people they care about most because they’re so hypervigilant to signs of rejection. They might test their partner (“Do you even love me?”), withdraw preemptively, or become argumentative—all attempts to protect themselves from the pain they expect to feel.

Communication Breakdowns Driven by RSD and Anger

When relationships strained by ADHD emotional dysregulation reach their breaking point, it’s usually because communication has deteriorated. And much of this deterioration stems from the combination of rejection sensitivity and poor emotional regulation.

Here’s a typical scenario: A partner tries to address something that bothered them. Something small—dishes in the sink, being late to dinner, not responding to a text. But because the person with ADHD is operating in a state of emotional hypersensitivity, they hear the complaint as a global condemnation. Their cortisol spikes. Their fight-or-flight system activates.

What follows is often one of these patterns:

  • Defensive escalation: Rather than hearing the original concern, the person with ADHD immediately counterattacks. “Well, you do this all the time!” The conversation spirals from a small issue to a relationship-threatening argument in seconds.
  • Shutdown and withdrawal: Overwhelmed by the emotional intensity, the person with ADHD goes silent. They feel too dysregulated to communicate. Their partner feels unheard and abandoned. The conflict remains unresolved.
  • Flooding: The person with ADHD becomes so emotionally overwhelmed that they can’t access the rational, verbal parts of their brain. Communication becomes impossible. Words get twisted. Intentions get misread.
  • Rumination without resolution: Hours or days after a conflict, the person with ADHD is still cycling through painful thoughts, while their partner has moved on. They bring up the argument again, re-litigating it endlessly.

For more depth on how anger specifically shows up with ADHD, read our comprehensive article on ADHD and anger management.

These communication patterns are exhausting for both partners. The non-ADHD partner often feels like they’re walking on eggshells, carefully monitoring what they say to avoid triggering an emotional explosion. The person with ADHD feels perpetually misunderstood and criticized. Both partners end up emotionally depleted. [3]

What Partners of People with ADHD Need to Know

If you’re in a relationship with someone who has ADHD, understanding the neurology behind their emotional responses is crucial for your own wellbeing and the health of your partnership. [5]

It’s Not Personal, Even Though It Feels That Way

When your partner with ADHD has an emotional outburst, accuses you of not caring, or withdraws emotionally, the trigger is often their own dysregulation—not actually your failure or lack of love. This is intellectually difficult to accept in the moment when you’re being blamed, but it’s neurologically true. Their emotional system is misfiring, not their love for you.

That said: understanding this doesn’t mean accepting emotional abuse. There’s a difference between a dysregulated response and deliberate cruelty. Boundaries still matter.

Your Emotional Needs Matter Too

Many partners of people with ADHD develop a hypervigilant caretaking role. They manage their own emotions carefully, prioritize their partner’s emotional state, and suppress their own needs to keep the peace. Over time, this causes resentment and burnout.

You cannot regulate another person’s emotions for them. You cannot prevent their emotional dysregulation by being perfect. And you shouldn’t have to. Your emotional wellbeing is equally important. A healthy relationship requires both partners’ needs to matter.

Professional Support Is Often Necessary

Many couples managing ADHD emotional dysregulation benefit enormously from therapy—both individual and couples work. A therapist trained in ADHD can help the person with ADHD understand their neurological patterns and develop skills. They can also help both partners communicate more effectively and rebuild trust.

Medication and Treatment Help

Stimulant medications can significantly improve emotional regulation in people with ADHD by enhancing prefrontal cortex function. While medication isn’t a cure for relationship problems, it often creates enough improvement in emotional control that other strategies become more effective. If your partner hasn’t explored medication or treatment options, gently encouraging this conversation might be important.

Couples Strategies That Actually Work

Breaking the cycle of ADHD emotional dysregulation in a relationship requires deliberate, consistent effort from both partners. Here are evidence-backed strategies that work:

1. Build a “Pause and Reset” Protocol

When emotional intensity escalates, communication often breaks down. Establish a pre-agreed signal that either partner can use to pause the conversation. This might be a word, a hand gesture, or a simple phrase: “I need to pause.”

The agreement is that when someone says this, both partners stop. No further argument happens in that moment. The person who called the pause takes space to regulate—maybe 20 minutes, maybe longer. This prevents flooding, de-escalates, and often allows both partners to approach the issue more rationally later.

2. Schedule “State of the Union” Conversations

Rather than addressing relationship issues in the heat of the moment (when dysregulation is highest), schedule a weekly or bi-weekly conversation dedicated to what’s working and what needs attention. This conversation happens when both partners are calm, have time, and can be intentional.

The structure matters: Each person shares three specific things they appreciated about their partner that week, then raises one issue they’d like to discuss. Keep it short—15 to 20 minutes. This prevents the accumulation of resentments and allows for calmer problem-solving.

3. Practice Validation Before Problem-Solving

One of the most damaging patterns in relationships affected by ADHD emotional dysregulation is that partners jump straight to problem-solving or defending themselves, skipping validation entirely.

Try this instead: When your partner shares something emotional, first validate what they’ve said before addressing the content. “That sounds really frustrating. I can see why you’d feel that way.” This takes 10 seconds and often prevents the entire conversation from becoming defensive.

4. Use “I” Statements and Assume Positive Intent

Instead of: “You never listen to me. You’re so selfish,” try: “I felt hurt when I wasn’t heard, and I need us to find a way to talk where we both feel respected.”

For the person with ADHD, try consciously assuming your partner’s best intentions. When they raise a concern, the default assumption is that they’re trying to improve the relationship, not criticize you. This is cognitively difficult when your brain is in threat mode, but it’s transformative when you practice it consistently.

5. Address Rejection Sensitivity Directly

If your partner has RSD, you might explicitly say: “I want to talk about something that bothered me, and I want you to know that this isn’t about you as a person. I’m not rejecting you. I’m addressing a specific situation.”

Some couples even use a pre-agreed label: “This is a logistical issue, not a relationship issue.” This helps the brain categorize the conversation correctly and prevents the amygdala from hijacking the response.

6. Build in Positive Connection Regularly

When a relationship is struggling with emotional dysregulation, both partners can become focused on problems and conflict. Deliberately build in moments of positive connection: a 10-minute conversation without phones, physical affection, laughter, shared activities.

This isn’t superficial. Positive emotional experiences actually build resilience and make both partners more able to handle conflict productively.

7. Consider Professional Support

A therapist or couples counselor trained in ADHD can teach you both skills specific to managing emotional dysregulation. Cognitive-behavioral therapy (CBT) and dialectical behavior therapy (DBT) have both shown effectiveness for emotion regulation challenges. It’s not a sign of failure; it’s smart problem-solving.

Moving Forward

Relationships affected by ADHD emotional dysregulation are challenging, but they’re absolutely improvable. The neurology is real, but so is neuroplasticity. The patterns are entrenched, but they can change with intentional effort and the right support.

What makes the difference isn’t a partner who never gets dysregulated or a partner who never makes mistakes. It’s two people who understand what’s actually happening, who commit to growth, and who build structures and skills to manage the dysregulation when it shows up.

If you’re struggling with this dynamic, know that you’re not alone—and there is a path forward.

Disclaimer: This article is for informational purposes only and does not constitute medical or psychological advice. Consult a qualified mental health professional before making changes to your relationship approach or seeking treatment for ADHD.

        Last updated: 2026-05-11

        About the Author

        Published by Rational Growth. Our health, psychology, education, and investing content is reviewed against primary sources, clinical guidance where relevant, and real-world testing. See our editorial standards for sourcing and update practices.


        Your Next Steps

        • Today: Pick one idea from this article and try it before bed tonight.
        • This week: Track your results for 5 days — even a simple notes app works.
        • Next 30 days: Review what worked, drop what didn’t, and build your personal system.

        References

        1. Dolapoglu, N. (2025). The relationship between attention deficit hyperactivity disorder, emotion regulation difficulties, and sleep quality in adults. PMC. Link
        2. Slobodin, O. (2025). A controlled study of emotional dysfunction in adult women with ADHD. PMC. Link
        3. Knies, S., et al. (2021). [Study on ADHD symptoms and relationship satisfaction]. Cited in Simply Psychology. Link
        4. Bruner, E., et al. (2015). [Study on ADHD and relationship quality]. Cited in Simply Psychology. Link
        5. Zeides Taubin, T., & Maeir, A. (2023). [Qualitative study on women partnered with men with ADHD]. Cited in Simply Psychology. Link
        6. Barkley, R. A., et al. (2008). [Study on ADHD and relationship dissatisfaction]. Cited in Psychology Today. Link


How Black Holes Form: From Dying Stars to Cosmic Singularities

When I first learned about black holes in my physics classes, I remember the disorienting feeling of trying to wrap my head around an object so dense that not even light can escape it. Black holes aren’t just theoretical curiosities—they’re one of the most fascinating and well-documented phenomena in modern astronomy. Understanding how black holes form gives us insight into stellar evolution, the nature of spacetime itself, and the violent endpoints of massive cosmic objects. Whether you’re curious about the universe or looking to expand your scientific literacy, this deep dive into black hole formation will reshape how you think about the cosmos.

Related: solar system guide

The Birth of a Black Hole: Stellar Collapse and Gravity’s Ultimate Victory

Black holes don’t appear out of nowhere—they’re born from the catastrophic collapse of massive stars. When a star reaches the end of its life cycle, particularly if it’s sufficiently massive, gravity wins a battle it has been fighting for the star’s entire life. For most of that existence, the outward pressure from nuclear fusion in the core counterbalances the inward crush of gravity. But when a massive star exhausts its nuclear fuel, this equilibrium collapses (quite literally).

The process begins with what astronomers call a supernova explosion—the violent death throes of a star. During this event, the star’s outer layers are blown away in a spectacular blast that can briefly outshine an entire galaxy of billions of stars. What remains behind is the core, and depending on the star’s original mass, this core will become one of two compact objects: a neutron star or, in the case of the most massive stars, a black hole (Ghez, 2020).

For a black hole to form, the stellar remnant must be so massive that no known force can prevent its gravitational collapse. This typically requires a star that was at least 20-25 solar masses before it exploded. The inward force of gravity becomes so overwhelming that electrons are forced into protons, creating neutrons. Then even neutrons can’t resist—everything collapses into an infinitely dense point called a singularity, surrounded by an event horizon—the boundary beyond which nothing can escape.

The Event Horizon: The Point of No Return

One of the most mind-bending aspects of black hole formation is the event horizon, which marks the boundary of a black hole. At this threshold, the escape velocity—the speed needed to break free from an object’s gravitational pull—equals the speed of light. Since nothing can travel faster than light, nothing can escape once it crosses this boundary, not even photons of light itself.

The size of the event horizon is determined by a measurement called the Schwarzschild radius, named after physicist Karl Schwarzschild. For Earth, the Schwarzschild radius is about 9 millimeters. If we could compress our entire planet into a sphere smaller than a marble without altering its mass, it would become a black hole. For the Sun, the Schwarzschild radius is roughly 3 kilometers. This demonstrates something counterintuitive: a black hole isn’t necessarily about density in the way we typically think about it—it’s about having a tremendous amount of mass concentrated in a small enough space.
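Those two figures follow from the Schwarzschild radius formula, r_s = 2GM/c². Here’s a minimal sketch of the calculation in Python, using rounded values for the physical constants (so the outputs match the text’s ballpark figures rather than exact published values):

```python
# Schwarzschild radius: r_s = 2 * G * M / c^2
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2 (rounded)
c = 2.998e8     # speed of light, m/s (rounded)

def schwarzschild_radius(mass_kg: float) -> float:
    """Radius (meters) a mass must be compressed inside to become a black hole."""
    return 2 * G * mass_kg / c**2

M_EARTH = 5.972e24  # kg
M_SUN = 1.989e30    # kg

print(f"Earth: {schwarzschild_radius(M_EARTH) * 1000:.1f} mm")  # Earth: 8.9 mm
print(f"Sun:   {schwarzschild_radius(M_SUN) / 1000:.2f} km")    # Sun:   2.95 km
```

Note that the radius scales linearly with mass: double the mass, double the Schwarzschild radius—which is why “density” alone is a misleading way to think about black holes.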

The event horizon itself is one of the most peculiar features in the universe. To an outside observer, time appears to slow down as an object approaches the event horizon. An astronaut falling into a black hole would appear, from Earth, to move more and more slowly, eventually freezing at the horizon itself—forever. Yet from the astronaut’s perspective, they would experience time normally and cross the event horizon in finite time, although they’d be violently stretched by tidal forces (a phenomenon called spaghettification) before reaching the singularity.

Stellar Mass Black Holes: The Most Common Type

When we talk about how black holes form through stellar collapse, we’re primarily discussing stellar mass black holes—the most frequently observed type. These range from a few solar masses to around 20 solar masses. We know they exist because we can observe their effects on nearby stars and gas. For instance, in the binary system Cygnus X-1, a black hole orbits a blue supergiant star, pulling material from it and creating an accretion disk that emits intense X-rays we can detect from Earth.

The formation process is well-documented. A massive star lives its life relatively normally until it reaches the end—typically after just a few million years, since massive stars burn through their fuel quickly. The larger the star, the shorter its lifespan. When the fuel runs out, the core can no longer support itself against gravity. The star collapses inward catastrophically. The rebounding shock wave from this collapse tears through the star’s outer layers in a supernova explosion, but the core itself keeps collapsing, unrelenting, until a black hole is born.

Astronomers have identified numerous stellar mass black holes through careful observation and measurement. The evidence is compelling: we measure the orbital speeds of stars and gas around these invisible objects, apply Kepler’s laws, and calculate masses that can only be explained by black holes. X-ray observations reveal the telltale signature of material heating as it spirals toward the event horizon. [1]
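The mass calculation described above is straightforward for a circular orbit: setting gravitational force equal to centripetal force gives M = v²r/G. A quick sketch, with purely illustrative numbers (not actual Cygnus X-1 measurements):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, meters

def central_mass(orbital_speed_ms: float, orbital_radius_m: float) -> float:
    """For a circular orbit, v^2 = G*M/r, so the central mass is M = v^2 * r / G."""
    return orbital_speed_ms**2 * orbital_radius_m / G

# Hypothetical companion star: orbiting at 300 km/s at a radius of 0.2 AU.
m = central_mass(3.0e5, 0.2 * AU)
print(f"Implied mass: {m / M_SUN:.1f} solar masses")  # Implied mass: 20.3 solar masses
```

If the implied mass exceeds what any neutron star can hold (roughly 3 solar masses) and no light comes from the object, a black hole is the only remaining explanation.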

Supermassive Black Holes: The Universe’s Gentle Giants

While stellar mass black holes form through the collapse of dying stars, supermassive black holes—millions to billions of times the mass of our Sun—likely form through different mechanisms. These cosmic behemoths sit at the centers of most large galaxies, including our own Milky Way. The black hole at our galaxy’s center, called Sagittarius A*, contains about 4 million solar masses. [2]

How such enormous black holes form remains one of astronomy’s greatest puzzles. One leading theory suggests they grew from stellar mass black holes through repeated mergers and by consuming material over billions of years. When galaxies collide and merge, their central black holes may spiral together and merge, creating a larger black hole. Also, a growing black hole at the center of a galaxy can become a gravitational sink, drawing in stars, gas, and other matter, growing ever larger. [5]

Another intriguing possibility is that supermassive black holes formed more directly in the early universe from the collapse of massive gas clouds before stars even existed (Rees, 1984). This would explain why we observe such massive black holes in the earliest galaxies, even though there hasn’t been enough time for them to grow from stellar mass precursors through the slower process of accretion and mergers.

The Accretion Process: Feeding Black Holes and Powering the Universe

Once a black hole forms, it doesn’t simply sit alone and inactive. If material—gas, dust, or stellar debris—comes within the black hole’s gravitational reach, the black hole can consume it. This process is called accretion, and it explains why black holes become some of the brightest objects in the universe.

As material falls toward a black hole, it doesn’t immediately plunge across the event horizon. Instead, it forms an accretion disk, similar to water swirling around a drain. This disk heats up due to friction between particles moving at different speeds. The innermost regions of the disk, closest to the event horizon, reach temperatures of millions of degrees and emit intense radiation across the electromagnetic spectrum—X-rays, ultraviolet light, and visible light.

This radiation process is phenomenally efficient. When matter accretes onto a black hole, the conversion of gravitational potential energy into radiation is far more efficient than nuclear fusion. A black hole can convert up to 40 percent of the rest mass energy of infalling material into radiation, whereas nuclear fusion converts only about 0.7 percent. This is why active galaxies with feeding supermassive black holes can outshine all their stars combined.
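The efficiency comparison above is just E = ηmc² with two different values of η. A toy calculation, using the figures from the paragraph (0.7% for fusion, and 40% as the upper bound the text quotes for accretion):

```python
c = 2.998e8  # speed of light, m/s

def energy_released(mass_kg: float, efficiency: float) -> float:
    """Energy radiated when mass-energy is converted at a given efficiency: E = eta * m * c^2."""
    return efficiency * mass_kg * c**2

m = 1.0  # one kilogram of infalling material
fusion = energy_released(m, 0.007)     # hydrogen fusion, ~0.7 percent
accretion = energy_released(m, 0.40)   # upper-bound accretion efficiency per the text

print(f"Accretion releases ~{accretion / fusion:.0f}x more energy than fusion")  # ~57x
```

That factor of roughly 57 is why a feeding supermassive black hole, gram for gram, outshines the nuclear furnaces of every star in its galaxy.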

The study of how black holes form is intimately connected to understanding accretion, because the material and radiation we observe tell us about the black hole’s properties and formation history. By analyzing the radiation signatures from accretion disks, astronomers can infer the black hole’s mass, spin, and other characteristics.

Evidence and Observation: How We Know Black Holes Are Real

For decades, black holes were purely theoretical—predictions of Einstein’s general relativity that seemed too bizarre to exist. But modern astronomy has provided overwhelming evidence. In 2019, the Event Horizon Telescope collaboration produced the first direct image of a black hole’s shadow, revealing the silhouette of a black hole at the center of the galaxy M87 against the glowing accretion disk surrounding it. This image confirmed decades of theoretical predictions and provided visual confirmation of black hole formation on a supermassive scale.

Before that breakthrough, we had indirect but powerful evidence. Gravitational wave detectors like LIGO have detected the gravitational waves produced when two orbiting black holes spiral inward and collide in a violent merger. These detections give us information about how black holes form, their masses, and their spins (Abbott et al., 2016). Each detection of merging black holes confirms that black holes are real, numerous, and formed through the processes we theorize.

X-ray observations have been crucial in identifying stellar mass black holes in binary systems. When a black hole orbits a normal star, it can pull material from that star, creating an accretion disk that glows in X-rays. These X-ray signatures, combined with measurements of the visible star’s orbital motion, allow us to calculate the mass of the invisible companion and confirm it’s a black hole.

The Implications of Black Hole Formation for Physics and Cosmology

Understanding how black holes form pushes us to the limits of our knowledge. Black holes represent a regime where gravity becomes so strong that quantum mechanics and general relativity come into conflict. Physicists are still working to develop a theory of quantum gravity that can describe what happens at the singularity—that infinitely dense point where our current physics breaks down.

The formation of black holes also teaches us about the ultimate fate of massive stars and the evolution of galaxies. Supermassive black holes at the centers of galaxies play a role in regulating galaxy growth and the formation of stars within galaxies. When a black hole becomes active and feeds on material, the energy released can blow away gas from the galaxy, shutting down star formation. This feedback mechanism may explain why galaxies aren’t larger and have fewer stars than we’d expect.

Also, black holes may have practical implications for physics that we’re only beginning to explore. Some theoretical physicists have speculated about using black holes as energy sources or even as portals to other parts of spacetime—though these remain highly speculative. More immediately, the study of black holes provides natural laboratories for testing our most fundamental theories about gravity and spacetime.

Conclusion: Cosmic Laboratories of Extreme Physics

Black holes form through one of the most dramatic processes in nature: the violent death and catastrophic collapse of the most massive stars in the universe. From stellar mass black holes born in supernovae to the supermassive black holes that anchor galaxies, these objects represent gravity in its most extreme form. The process of how black holes form continues to drive discovery in modern astronomy, from direct imaging to gravitational wave detection.

What makes this knowledge particularly valuable for professionals and knowledge workers is how it expands your conceptual toolkit. Understanding black hole formation requires grappling with non-intuitive ideas—spacetime curvature, event horizons, the relationship between mass and gravity—that cultivate more sophisticated thinking about complex systems in your own field. Whether you’re managing uncertainty, thinking about cause and effect in complicated environments, or simply wanting to maintain intellectual curiosity, engaging with black hole physics is a reminder of how much there is to learn.

The universe continues to surprise us with phenomena more extreme than we imagined. As technology improves and our theories advance, we’ll undoubtedly refine our understanding of how black holes form and what role they play in shaping the cosmos. For now, the fact that we can observe these objects at all—that we can photograph them, detect the gravitational waves from their mergers, and measure their properties—stands as testimony to the power of human curiosity and rigorous observation.


References

  1. LIGO-Virgo-KAGRA Collaboration (2025). GW241011 and GW241110: Exploring Binary Formation and Fundamental Physics with Asymmetric, High-Spin Black Hole Coalescences. The Astrophysical Journal Letters. Link
  2. Kelly, B. J. et al. (2025). Gravitational-Wave Signatures of Massive Black Hole Formation. arXiv:2512.09197 [gr-qc]. Link
  3. NASA Physics of the Cosmos Program (n.d.). Massive Black Holes and the Evolution of Galaxies. NASA Science. Link
  4. Schirber, M. (2025). Heaviest Black Hole Merger Flouts a Forbidden Gap. Physics. Link
  5. Rees, M. J. (1984). Formation of Supermassive Black Holes by Direct Collapse. Nature. Link
  6. Abbott, B. P. et al. (LIGO Scientific Collaboration and Virgo Collaboration) (2016). Observation of Gravitational Waves from a Binary Black Hole Merger. Physical Review Letters. Link


How Search Engines Work: From Crawling to Ranking Your Results

If you’re reading this, you’ve probably used a search engine today—maybe multiple times. You typed a question, hit enter, and within milliseconds, you got back thousands of results ranked by relevance. But have you ever wondered what happens in those milliseconds? How does Google (or Bing, or DuckDuckGo) know which pages are most useful for your query? Understanding how search engines work isn’t just academic curiosity; it’s practical knowledge that can help you find better information faster, evaluate sources more critically, and even improve your own online visibility if you create content.

Related: solar system guide

I’ve spent years teaching students how to research effectively, and I’ve noticed that those who understand the mechanics of search engines become dramatically better at finding reliable information. They ask smarter questions, they recognize when results might be biased, and they know how to refine searches to cut through the noise. Whether you’re a knowledge worker trying to stay ahead in your field, an entrepreneur building a web presence, or simply someone who wants to be more intentional about where your information comes from, understanding this process matters.

The Three Core Processes: Crawling, Indexing, and Ranking

When you ask a search engine a question, you’re not actually searching the entire internet in real-time. That would be impossibly slow. Instead, search engines maintain massive indexes—organized libraries of web content—that they’ve built over months and years. The process of creating and maintaining these indexes happens in three main stages: crawling, indexing, and ranking (Sullivan, 2023).

Crawling is the discovery phase. Search engines deploy automated programs called crawlers (also called spiders or bots) that continuously browse the web, following links from page to page. These crawlers start from known pages and follow every hyperlink they find, documenting the content they discover. Think of crawlers as tireless librarians walking through an infinite library, jotting down what they find on each shelf. Google’s primary crawler is called Googlebot, and it crawls billions of pages every single day. But crawlers don’t have unlimited time or resources, so they prioritize: they revisit frequently updated sites more often, they focus on pages that seem important based on how many other pages link to them, and they respect certain instructions webmasters leave in files called robots.txt that essentially say “don’t crawl this part.” [1]
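The core of crawling is just breadth-first traversal of a link graph: start from known pages, follow every link, and never visit the same page twice. Here’s a minimal sketch against a toy in-memory “web” (a dictionary standing in for real HTML fetching, which production crawlers like Googlebot do at vastly greater scale, with politeness rules and robots.txt handling):

```python
from collections import deque

# Toy link graph: page -> pages it links to (hypothetical names, not real URLs).
TOY_WEB = {
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["post1", "post2"],
    "post1": ["blog"],
    "post2": ["blog", "about"],
}

def crawl(seed: str, web: dict) -> list:
    """Breadth-first discovery from a seed page, visiting each page exactly once."""
    seen = {seed}
    queue = deque([seed])
    order = []
    while queue:
        page = queue.popleft()
        order.append(page)
        for link in web.get(page, []):
            if link not in seen:  # skip pages we've already queued or visited
                seen.add(link)
                queue.append(link)
    return order

print(crawl("home", TOY_WEB))  # ['home', 'about', 'blog', 'post1', 'post2']
```

Real crawlers layer scheduling on top of this skeleton—revisiting fresh sites more often and prioritizing well-linked pages—but the discovery logic is the same.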

Indexing happens next. Once a crawler has discovered and downloaded a page, that page’s content gets analyzed and added to the search engine’s index. The search engine extracts key information: the page’s title, its main content, metadata, images, and links. It notes what words appear on the page and where they appear—words in headings are weighted differently than words in body text, for example. This indexing process is astonishingly complex. Search engines need to understand not just the words on a page, but their semantic meaning: what the page is actually about. This is why modern search engines use artificial intelligence and machine learning models to understand language context (Moz, 2023). [2]
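The data structure at the heart of indexing is the inverted index: instead of mapping pages to words, map each word to the pages that contain it, so lookups at query time are instant. A simplified sketch (real indexes also store positions, weights for headings versus body text, and semantic embeddings):

```python
import re
from collections import defaultdict

# Tiny corpus standing in for crawled pages (illustrative text, not real data).
PAGES = {
    "page1": "Black holes form when massive stars collapse",
    "page2": "Search engines rank pages by relevance",
    "page3": "Massive stars end their lives in supernovae",
}

def build_index(pages: dict) -> dict:
    """Map each lowercase word to the set of page IDs containing it."""
    index = defaultdict(set)
    for page_id, text in pages.items():
        for word in re.findall(r"[a-z]+", text.lower()):
            index[word].add(page_id)
    return index

index = build_index(PAGES)
print(sorted(index["massive"]))  # ['page1', 'page3']
```

A query like “massive stars” then reduces to intersecting two sets—which is why a search over billions of pages can return in milliseconds.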

Ranking is the final stage—and the one most people care about. When you submit a search query, the search engine doesn’t hand you its entire index. Instead, it filters for relevant pages and then sorts them by predicted usefulness. This is where the real intelligence lives. Search engines evaluate hundreds of factors when determining rank, and how search engines work depends heavily on these ranking algorithms, which are proprietary and constantly evolving. We don’t know the exact formula, but research and reverse-engineering by the SEO community has revealed that factors like backlinks (votes of confidence from other websites), page speed, mobile-friendliness, content quality, user engagement signals, and topical authority all play roles. [3]

The Role of Backlinks and Authority

One of the most important factors in how search engines work is the concept of backlinks—hyperlinks pointing to a page from other websites. When Google was founded by Larry Page and Sergey Brin, one of their key insights was treating backlinks like academic citations. If many reputable websites link to a page, that page probably contains valuable information. This idea became the foundation of PageRank, Google’s original ranking algorithm, and it remains influential today (Page & Brin, 1998). [4]
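The citation idea behind PageRank can be captured in a few lines: every page repeatedly shares its rank among the pages it links to, damped so rank doesn’t get trapped in loops. This is a textbook-style simplification—Google’s production algorithm uses hundreds of additional signals—and the link graph below is purely illustrative:

```python
# Illustrative link graph: page -> outbound links. Page "c" has the most inlinks.
LINKS = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

def pagerank(links: dict, damping: float = 0.85, iterations: int = 50) -> dict:
    """Power iteration: repeatedly redistribute each page's rank along its outlinks."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page keeps a baseline (1 - damping) / n, modeling a random jump.
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

ranks = pagerank(LINKS)
print(max(ranks, key=ranks.get))  # c  (the page with the most inbound links)
```

Notice that "d" links out but receives no links, so it ends up with the minimum possible rank—the algorithmic version of “links are votes.”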

But not all backlinks are created equal. A link from a major publication like The New York Times carries far more weight than a link from an obscure blog. Search engines evaluate the authority of linking domains—essentially, they ask: “Is the site linking to this page itself trustworthy and relevant?” This creates a kind of reputation economy on the web. High-authority sites naturally accumulate more valuable backlinks, which reinforces their authority, which means their links carry more weight when they link to other pages. [5]

This system isn’t perfect. People have tried to game it for years, creating thousands of low-quality sites just to generate backlinks to a money-making site. To combat this, Google constantly updates its algorithms to detect and penalize unnatural linking patterns. The infamous Penguin update (rolled out in 2012) was specifically designed to devalue sites that engaged in aggressive link manipulation. If you’re trying to build online visibility for your own work, understanding this means you should focus on creating genuinely valuable content that people naturally want to link to, rather than chasing backlinks themselves.

Content Quality and Semantic Understanding

In the early days of search, search engine rankings were more straightforward: match keywords, count how many times they appear, rank accordingly. That system could be gamed easily by keyword stuffing—writing something like “best pizza best pizza best pizza” over and over—which annoyed users and degraded search results.

Modern search engines have moved far beyond simple keyword matching. They use natural language processing and machine learning to understand what content is actually about and, more importantly, how useful it is. Google’s BERT update (2019) was a major milestone: it helped Google understand the nuances of language and the intent behind queries. When you search for “apple,” the search engine needs to determine whether you want information about the fruit or the tech company. BERT and similar models examine context across the entire query and document to make better predictions.

This shift has huge implications for anyone creating content. It means that simply stuffing your page with keywords is counterproductive. Search engines are explicitly looking for pages that comprehensively address a topic, are written clearly, cite credible sources, and match what the searcher actually intended to find. This is good news if you care about quality information—the incentive structure increasingly rewards genuinely useful content.

User Signals and Engagement Metrics

Search engines also pay attention to how users interact with search results. This is where your behavior feeds back into the ranking system. When you click on a search result and stay on that page for several minutes, you’re sending a signal: “This result was relevant and useful.” Conversely, when you click a result and immediately go back to search for something else (called a “bounce”), you’re signaling: “This wasn’t what I was looking for.” These user engagement signals help search engines refine their understanding of which pages are truly valuable (Moz, 2023).

This creates an interesting feedback loop. Highly-ranked pages tend to get more clicks simply because they’re more visible. Those clicks generate engagement signals that reinforce their ranking. Meanwhile, a high-quality page ranked lower gets fewer chances to prove its value. This is why SEO professionals focus so heavily on getting into the top three results—there’s a massive cliff in click-through rates between position one and position ten.

For knowledge workers and researchers, understanding these signals helps explain why you might encounter misinformation in search results. A well-optimized piece of misinformation that keeps users engaged (perhaps because it confirms what they already believe) might rank higher than more accurate but less optimized information. This argues for developing stronger critical evaluation skills and consulting multiple sources rather than trusting the top result blindly.

Personalization and the Filter Bubble Effect

Here’s something that surprises many people: the search results you see are not the same results your colleague or friend sees. Search engines personalize results based on your search history, location, device, and sometimes even inferred interests based on your Google account activity. This personalization is meant to improve relevance—showing you results that match your past behavior and context. If you’ve been researching renewable energy extensively, you’re more likely to see energy-related results elevated when you search for “sustainable future.”

This personalization creates what researcher Eli Pariser called the “filter bubble”—the tendency to be fed information that aligns with your existing beliefs and interests, which can limit exposure to alternative perspectives (Pariser, 2011). For professionals and learners, this is worth keeping in mind. If you consistently search within your field of expertise, search engines will reinforce that domain knowledge. But you might miss emerging ideas from adjacent fields. Deliberately searching outside your comfort zone, reading sources you disagree with, and using multiple search engines with different algorithms can help you break through filter bubbles.

Mobile-First Indexing and Technical Foundations

In 2021, Google officially shifted to mobile-first indexing for all websites. This reflects reality: more than half of all web traffic now comes from mobile devices. For how search engines work today, this means Google’s crawler primarily evaluates the mobile version of your website when deciding how to rank it. If your mobile site is slow, broken, or missing content that appears on desktop, your ranking will suffer accordingly.

This touches on the technical foundation of search engine ranking: page speed, mobile responsiveness, and the overall health of a website’s infrastructure. Search engines measure these using metrics like Core Web Vitals—page speed metrics that Google measures and uses as ranking factors. A slow website doesn’t rank as well as a fast one with similar content, all else being equal. For anyone publishing content online, optimizing these technical factors is just as important as writing great copy.

There are other technical elements worth knowing: structured data (markup that tells search engines what kind of content a page contains), secure HTTPS connections, proper site architecture and internal linking, and avoiding broken links. These aren’t optional niceties; they’re part of how search engines work now, and they directly impact visibility.

What This Means for You

Whether you’re trying to find better information or trying to be found, understanding how search engines work changes your strategy. If you’re a researcher or knowledge worker, understanding the ranking factors helps you spot when results might be biased toward popularity rather than accuracy. You’ll naturally drift toward cross-checking information across sources and being skeptical of clickbait that shoots to the top through engagement manipulation.

If you create content—whether it’s a blog, a course, a business website, or research you want to reach an audience—understanding how search engines work means you can optimize thoughtfully. You’ll focus on creating genuinely useful content that comprehensively addresses what your audience is searching for. You’ll write clear headlines and structure your content logically. You’ll ensure your technical infrastructure is sound. And you’ll naturally build authority through consistent, valuable output that others in your field want to link to and share.

The search engine landscape continues to evolve. Artificial intelligence is becoming more sophisticated at understanding intent and context. Voice search and visual search are growing. But the core principles—discovery through crawling, organization through indexing, and ranking through relevance signals—remain the foundation. As you continue learning and working in our information-rich world, remembering how search engines work helps you work through digital information more effectively and contribute to it more intelligently.


References

  1. Alalaq, A. S. (2025). AI-Powered Search Engines. ShodhAI. Link
  2. Venkit, P. N. (2025). Search Engines in the AI Era: A Qualitative Understanding. ACM Digital Library. Link
  3. University of Wisconsin. (n.d.). Google and Other Search Engines. Information Literacy: A Practical Guide. Link
  4. Adedeji, A. A. (2023). Use of Search Engines as Predictors of Research Skills of Postgraduate Students. ScholarWorks. Link
  5. RSI International. (n.d.). Core Technologies in Semantic Search Engines. International Journal of Research in Innovation and Applied Sciences. Link
  6. EBSCO. (n.d.). Search Engines and Mathematics. Research Starters: Engineering. Link


How Black Holes Form: The Cosmic Extreme and What It Teaches Us About the Universe

When I first learned about black holes in my physics class years ago, the concept felt almost like science fiction—a region of space where gravity becomes so intense that nothing, not even light, can escape. Yet black holes are one of the most rigorously confirmed predictions of Einstein’s general relativity, and we now know they’re common throughout the universe. Understanding how black holes form isn’t just academic curiosity; it reveals fundamental truths about stellar evolution, the nature of spacetime, and the ultimate fate of massive objects in the cosmos. For knowledge workers and lifelong learners, grasping these concepts strengthens your scientific literacy and provides a framework for understanding complexity itself—a skill that translates directly to problem-solving in professional life.

Related: solar system guide

The process of black hole formation is intimately connected to stellar death. Most black holes form when massive stars reach the end of their lives, and understanding this journey requires us to think about gravity, stellar processes, and the extreme conditions that exist at the cores of dying stars. Below, you’ll see the science behind how black holes form, the different pathways that lead to their creation, and what observations have confirmed our theoretical predictions.

The Stellar Foundation: Why Massive Stars Matter

Black hole formation begins not with the black hole itself, but with the star that precedes it. Not all stars create black holes—only the most massive ones do. To understand why, we need to think about stellar balance and what happens when that balance breaks down.

Throughout most of a star’s life, it exists in a state of equilibrium. The outward pressure from nuclear fusion in the core counteracts the inward crush of gravity. This balance keeps the star stable for millions or billions of years. A star like our Sun will maintain this equilibrium for about 10 billion years. However, stars much more massive than the Sun—those with 20 or more solar masses—burn their fuel at tremendously faster rates. They exhaust their nuclear fuel in a few million years, a blink of an eye in cosmic time. [1]

When I think about stellar mass, it’s helpful to remember that gravity’s force increases dramatically with mass. A star that is 20 times more massive than the Sun isn’t just 20 times stronger in its gravitational pull—the relationship is more complex, involving the density distribution and the inverse-square law of gravity. These massive stars live fast and die young, and their deaths are spectacular. Understanding this pattern is essential to understanding how black holes form from stellar remnants (Tolman, 1939; Oppenheimer & Snyder, 1939).

The Supernova Collapse: When Fusion Runs Out

The critical moment in black hole formation occurs when a massive star exhausts its nuclear fuel. Let me walk you through what happens during this dramatic finale.

A massive star doesn’t burn just hydrogen like our Sun does. As it ages, it moves through successive stages of stellar nucleosynthesis, fusing increasingly heavy elements in the core: hydrogen to helium, helium to carbon and oxygen, carbon to neon, and so on, up the periodic table. Each new fusion stage burns faster than the last. Hydrogen burning might last millions of years, but silicon burning, the final stage, lasts only about a day.

When the star finally builds an iron core, fusion stops. Iron cannot undergo fusion to release energy; fusing iron consumes energy rather than releasing it. At this moment, the outward pressure from fusion suddenly vanishes, and gravity takes over completely. What happens next is one of the most violent events in the universe: the core collapses catastrophically. Within seconds, the entire iron core—perhaps the mass of our Sun compressed into a sphere the size of Earth—collapses inward.

This collapse is incredibly rapid. Material in the core falls inward at speeds approaching a quarter of the speed of light, and the density climbs by many orders of magnitude. If the star is massive enough, the density eventually becomes so extreme that black hole formation is inevitable: an event horizon, the boundary of no return from which nothing can escape, forms around the collapsing core (Schwarzschild, 1916).

The energy released during this catastrophic collapse becomes the power source for a supernova explosion. Neutrinos streaming from the collapsing core transfer energy to the outer layers of the star, blasting them outward at speeds of 10,000 kilometers per second or faster. For a brief moment, the supernova can outshine an entire galaxy of billions of stars. But underneath this cosmic fireworks display, something darker has been born: a black hole.

The Event Horizon: Where Physics Becomes Extreme

To truly understand how black holes form, we need to understand the event horizon—the defining feature that makes a black hole a black hole. The event horizon isn’t a physical surface; it’s a boundary in spacetime itself.

The radius of the event horizon is determined by the Schwarzschild radius formula: r = 2GM/c², where G is the gravitational constant, M is the mass of the black hole, and c is the speed of light. This elegant equation tells us that the event horizon size depends only on mass. A black hole with the mass of our Sun would have an event horizon with a radius of about 3 kilometers. A black hole with 10 solar masses would have a radius of 30 kilometers.
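To make the formula concrete, here is a small sketch (constants in SI units; values approximate) that reproduces the radii quoted above:

```python
# Schwarzschild radius r_s = 2GM/c^2, evaluated for the masses in the text.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius_km(solar_masses: float) -> float:
    return 2 * G * (solar_masses * M_SUN) / C**2 / 1000.0

print(round(schwarzschild_radius_km(1), 2))    # ~2.95 km for one solar mass
print(round(schwarzschild_radius_km(10), 1))   # ~29.5 km for ten solar masses
```

Because the radius is linear in mass, "about 3 kilometers per solar mass" is a handy rule of thumb.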

What’s remarkable is that once matter crosses the event horizon, it cannot escape, even in principle. This isn’t because of some magical barrier; rather, it’s because spacetime itself is so warped that all possible paths leading forward in time point toward the singularity. Light itself, the fastest thing in the universe, cannot escape. This is why black holes are black—they don’t emit light; they absorb it. [2]

The conditions near a stellar-mass black hole are extreme beyond everyday experience. Tidal forces, the difference in gravitational pull between one side of an object and the other, grow without bound as an infalling object approaches the central singularity. An astronaut falling feet-first into a stellar-mass black hole would be “spaghettified,” stretched out by the differential gravity well before reaching the horizon. Yet general relativity predicts that the event horizon itself is not locally special: an observer falling through it wouldn’t experience anything unusual at the moment of crossing. [4] [5]
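To put a number on spaghettification, here is a rough Newtonian sketch (my own illustration, using the standard leading-order tidal formula 2GMh/r³): evaluated at each hole’s own event horizon, the head-to-feet acceleration difference scales as 1/M².

```python
# Newtonian tidal estimate at the event horizon (illustration only).
# Head-to-feet acceleration difference for a ~2 m astronaut:
#   delta_a = 2 G M h / r^3, evaluated at r = r_s = 2GM/c^2,
# which works out to scale as 1/M^2.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / C**2

def tidal_at_horizon(mass_kg, height=2.0):
    r = schwarzschild_radius(mass_kg)
    return 2 * G * mass_kg * height / r**3  # m/s^2

print(f"{tidal_at_horizon(10 * M_SUN):.1e}")     # ~2.1e8 m/s^2: lethal
print(f"{tidal_at_horizon(4e6 * M_SUN):.1e}")    # ~1.3e-3 m/s^2: unnoticeable
```

The 1/M² scaling is why an observer could cross a supermassive horizon intact while a stellar-mass hole would tear them apart well outside it.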

Different Types of Black Holes: Multiple Formation Pathways

When we talk about how black holes form, there isn’t just one pathway. Astronomers have identified several types of black holes formed through different mechanisms.

Stellar-mass black holes form from the collapse of massive stars, as we’ve discussed. These typically range from about 5 to 20 solar masses. They form when stars with initial masses of roughly 20 solar masses or more reach the end of their lives. The supernova explosion ejects much of the star’s material into space, but the core collapses to form a black hole.

Intermediate-mass black holes are less well understood but appear to exist, with masses ranging from hundreds to thousands of solar masses. Their formation mechanism remains an active area of research. One possibility is that they form through collisions and mergers of smaller black holes in dense stellar clusters.

Supermassive black holes lurk at the centers of most large galaxies, including our own Milky Way. Sagittarius A*, the black hole at our galaxy’s center, has a mass of about 4 million suns. How these supermassive black holes formed is still debated. They may have grown from stellar-mass black holes through the accretion of matter and mergers, though this growth process cannot fully explain their sizes. Alternatively, they may have formed through the direct collapse of massive gas clouds in the early universe (Rees, 1984).

Understanding these different formation pathways enriches our picture of black hole astrophysics and reminds us that the universe contains multiple solutions to similar problems—a principle that extends far beyond physics into problem-solving more generally.

Observational Confirmation: From Theory to Evidence

For decades, black holes remained theoretical—predictions of Einstein’s equations with no observational confirmation. That changed dramatically in recent years. In 2015, the Laser Interferometer Gravitational-Wave Observatory (LIGO) directly detected gravitational waves from the merger of two black holes. These ripples in spacetime, predicted by Einstein a century earlier, provided the first direct evidence of black holes and their interactions (Abbott et al., 2016).

This discovery was revolutionary. By detecting the gravitational waves from merging black holes, astronomers could directly observe these objects and measure their properties. Subsequent LIGO observations have detected dozens of black hole mergers, allowing us to study the population of stellar-mass black holes throughout the universe.

But the observational revolution didn’t stop there. In 2019, the Event Horizon Telescope collaboration captured the first image of a black hole’s shadow: the dark region carved out by warped spacetime and light capture around the supermassive black hole at the center of the galaxy M87. This image, formed by coordinating radio telescopes across the Earth, showed that our theoretical predictions about black hole shadows matched reality with stunning precision.

More recently, observations of electromagnetic radiation from matter falling into black holes have provided insights into the accretion process. Matter doesn’t fall quietly into black holes; it heats up, emits X-rays, and sometimes launches jets of material traveling at near-light speeds. These observations help us understand how black holes grow through accretion over time.

The Singularity Question: Where Physics Breaks Down

At the heart of every black hole lies the singularity—the predicted point where density becomes infinite and our current physics breaks down. This is perhaps the deepest mystery in black hole physics.

General relativity predicts that matter falling into a black hole is crushed to infinite density at a single point in spacetime. However, physicists suspect this prediction is incomplete. At the densities and energies present in a black hole’s core, quantum effects should become important. Yet we don’t have a complete theory of quantum gravity—a theory that would unite Einstein’s general relativity with quantum mechanics.

This gap in our understanding is humbling. It reminds us that even our most successful theories have limits. Understanding black holes isn’t just about explaining gravity; it’s about recognizing the fundamental limits of human knowledge and the deep questions that remain unanswered.

Some physicists speculate that quantum gravity effects might eliminate the singularity entirely, replacing it with some other quantum structure. Others wonder if information falling into black holes is truly destroyed, or if there’s a way to recover it—a question that touches on the deepest foundations of quantum mechanics.

Conclusion: Black Holes as Teachers

Understanding how black holes form teaches us far more than astrophysics. It shows us the predictive power of mathematics and theory. Einstein wrote down his field equations without any hope that such extreme objects existed, yet decades later we found them. It demonstrates the importance of extreme conditions for revealing fundamental truths—we learn more about gravity by studying black holes than by studying ordinary stars. And it reminds us that the universe contains mysteries we’re only beginning to understand.

For knowledge workers and self-improvement enthusiasts, the lessons extend beyond science itself. The systematic approach used to understand black holes—from theoretical prediction to observational confirmation—is the same approach we should apply to personal challenges. We form hypotheses about what works, test them against reality, and refine our understanding based on evidence. Black holes, in their own way, are a testament to the power of curiosity, persistence, and willingness to think at the edges of human understanding.

Last updated: 2026-05-11

About the Author

Published by Rational Growth. Our health, psychology, education, and investing content is reviewed against primary sources, clinical guidance where relevant, and real-world testing. See our editorial standards for sourcing and update practices.

References

  1. Bueno, P., Cano, P. A., Hennigar, R. A., & Murcia, Á. J. (2025). Dynamical Formation of Regular Black Holes. Physical Review Letters. Link
  2. LIGO-Virgo-KAGRA Collaboration (2025). GW241011 and GW241110: Exploring Binary Formation and Fundamental Physics with Asymmetric, High-Spin Black Hole Coalescences. The Astrophysical Journal Letters. Link
  3. NASA Science (n.d.). Massive Black Holes and the Evolution of Galaxies. NASA Physics of the Cosmos Program. Link
  4. Caltech LIGO Team (2025). Colliding Black Holes Might Have Formed from Earlier Cosmic Smashups. Caltech News. Link
  5. Fairhurst, S. et al. (2025). Study: Pair of Distinct Black Hole Mergers Reveals Clues on How They Form and Evolve. UNLV News. Link


How Black Holes Form: From Dying Stars to Cosmic Singularities

If you’ve ever wondered what happens at the end of a star’s life, you’re touching on one of the most profound mysteries in physics. Black holes represent the ultimate fate of massive stars—regions of spacetime so extreme that nothing, not even light, can escape their gravitational pull. Understanding how black holes form connects us to fundamental truths about the universe, matter, and the laws governing everything we observe. In my experience teaching astronomy concepts to adults, I’ve found that people find black holes fascinating precisely because they’re both terrifying and beautiful: they challenge our intuition about how reality works. [3]

Related: solar system guide

What makes this topic particularly valuable for knowledge workers and professionals is that black hole physics reflects broader principles about systems reaching critical thresholds. The process of how black holes form teaches us about cause and effect on cosmic scales, resource depletion, and irreversible change—concepts that apply metaphorically to personal and professional growth as well.

The Stellar Death Prerequisite: Why Most Stars Don’t Become Black Holes

Not every star becomes a black hole. In fact, the vast majority won’t. To understand how black holes form, we first need to understand the fundamental requirement: mass. Specifically, a star must be massive enough to undergo certain stages of stellar evolution that lead to black hole formation.

Our sun, for instance, will never become a black hole. When it exhausts its hydrogen fuel in about 5 billion years, it will swell into a red giant, shed its outer layers, and leave behind a white dwarf, a dense but stable stellar remnant about the size of Earth. This is the fate of stars with initial masses up to roughly 8 solar masses (where one solar mass equals the sun’s mass).

For a star to eventually form a black hole, it typically needs to be at least 20-25 solar masses, though some research suggests even lower mass limits under certain conditions (Abbott et al., 2016). These massive stars are rare. In our Milky Way galaxy, fewer than one in a thousand stars are massive enough to end their lives as black holes. The rarity of black hole progenitors is one reason black holes were purely theoretical for decades before we had observational evidence of their existence.

During the star’s main-sequence lifetime (the long, stable period when it fuses hydrogen into helium), this mass requirement doesn’t reveal itself. From the outside, a massive star looks not so different from a smaller one. But internally, the physics is radically different. A massive star burns through its fuel at a ferocious rate. Where our sun will live for about 10 billion years, a 25-solar-mass star will burn out in only a few million years (Kippenhahn et al., 2012).

The Nuclear Burning Sequence: Layering Elements Toward Collapse

To truly understand how black holes form, we need to grasp what happens in the final stages of a massive star’s life. When a star begins to run low on hydrogen fuel, something critical occurs: the core contracts and heats up. This higher temperature allows the star to begin fusing helium into heavier elements. This process repeats.

Massive stars burn their way toward an iron endpoint. After fusing helium, a massive star’s core begins fusing carbon and oxygen. When those are depleted, the core contracts and heats further, allowing silicon fusion. Each stage produces heavier and heavier elements: carbon, oxygen, neon, magnesium, silicon, and eventually iron.

Iron is the critical threshold. Unlike previous fusion stages, fusing iron doesn’t release energy—it consumes it. When the core becomes predominantly iron, no further fusion can occur. The star has reached its breaking point. What happens next is catastrophic.

At this moment, the star’s core comprises an iron ball roughly the size of Earth, with a mass of about 1.4 times our sun’s mass, densely packed and supported only by electron degeneracy pressure (the quantum mechanical resistance of electrons to being compressed into the same space). This core temperature reaches about 1 billion Kelvin. The pressure is almost unimaginable—the weight of the entire overlying star pressing down on this iron core.
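That 1.4-solar-mass figure is the Chandrasekhar limit, the maximum mass electron degeneracy pressure can support. In its standard textbook form (where μ_e is the mean molecular weight per electron, approximately 2 for iron-group matter):

```latex
M_{\mathrm{Ch}} \;\approx\; \frac{5.83}{\mu_e^{2}}\, M_{\odot}
\;\approx\; 1.4\, M_{\odot}
\qquad \text{for } \mu_e \approx 2 .
```

Once the iron core grows past this limit, electron degeneracy pressure loses, and the collapse described next begins.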

The Supernova Explosion and Neutron Star Formation: The First Stage of Collapse

When the iron core can no longer support itself, the situation develops rapidly. Electrons are forced into protons, creating neutrons and releasing ghostly particles called electron neutrinos. This “neutronization” releases immense energy. The core collapses catastrophically, falling inward at speeds approaching 50,000 kilometers per second. This isn’t a gentle contraction—it’s violent and irreversible.

The infalling material suddenly encounters the incompressibility of nuclear matter. For a brief moment, the core’s density skyrockets, and the infall halts abruptly. This creates a shockwave that propagates outward through the star, heating material to billions of degrees. The result is a supernova explosion—one of the most violent events in the universe. The outer layers of the star are blasted into space at speeds of 10,000 to 30,000 kilometers per second, creating new heavy elements and seeding interstellar space with material that will eventually form new stars and planets.

In this stage, the outcome depends critically on the remnant core’s mass. If the core’s mass is less than about 2.7 solar masses, neutron degeneracy pressure, the quantum mechanical resistance of neutrons to further compression, can halt the collapse. The result is a neutron star, one of the most extreme objects known, yet still not a black hole. A neutron star is so dense that a teaspoon of its material would weigh billions of tons. [2]

But if the core’s mass is greater than about 2.7 solar masses, even the neutron pressure cannot halt the collapse. The star’s fate is sealed. [1]

Beyond the Neutron Star Limit: The Formation of the Event Horizon

This is where things get genuinely strange. When the core exceeds the neutron star mass limit, nothing in known physics can stop the collapse. Matter compresses past nuclear density to ever more extreme states. Within milliseconds, the core shrinks to a radius of tens of kilometers, then a few kilometers, then smaller still. [5]

At a critical radius called the Schwarzschild radius (named after physicist Karl Schwarzschild, who first calculated it in 1916), something extraordinary happens: the escape velocity exceeds the speed of light. Since nothing can travel faster than light, nothing can escape from within this radius. An event horizon, the point of no return, forms. Black hole formation is fundamentally about reaching this Schwarzschild radius. [4]

The Schwarzschild radius depends only on mass: roughly 3 kilometers per solar mass. For a 3-solar-mass black hole, the radius would be about 9 kilometers; for a 10-solar-mass black hole, roughly 30 kilometers. For our sun, if it were somehow compressed that far, the event horizon would be about 6 kilometers across.
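A quick way to see where the "3 kilometers per solar mass" rule comes from is the classic escape-velocity argument: set the Newtonian escape speed equal to the speed of light. The Newtonian derivation is heuristic, but it happens to land on the exact relativistic result.

```latex
\frac{1}{2}\, v_{\mathrm{esc}}^{2} = \frac{G M}{r}
\;\Rightarrow\; v_{\mathrm{esc}} = \sqrt{\frac{2 G M}{r}},
\qquad
v_{\mathrm{esc}} = c
\;\Rightarrow\;
r_{s} = \frac{2 G M}{c^{2}}
\;\approx\; 2.95\,\mathrm{km} \times \frac{M}{M_{\odot}} .
```

Plugging in 3 and 10 solar masses reproduces the 9-kilometer and 30-kilometer radii quoted above.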

The remarkable insight is that the interior structure might be even stranger than the exterior suggests. Einstein’s theory of general relativity predicts that at the absolute center lies a singularity—a point where density becomes infinite and our current physics breaks down completely. Whether actual singularities exist, or whether quantum gravity effects prevent their formation, remains an open question in theoretical physics (Hawking, 2014).

Observational Evidence: How We Know Black Holes Form

For most of the 20th century, black holes were mathematical predictions, not observed reality. That changed in the 1970s and especially in the past two decades. We now have compelling evidence that black hole formation is not just theoretical speculation; it’s observed astrophysics.

The strongest evidence comes from X-ray astronomy and gravitational wave detection. When a massive star ends its life and collapses, leaving a black hole, that black hole typically exists in a binary system with a companion star. Material from the companion star spirals toward the black hole, heating to millions of degrees and emitting intense X-rays. Objects like Cygnus X-1 showed X-ray signatures consistent with black holes decades ago.

More recently, gravitational wave detectors like LIGO (the Laser Interferometer Gravitational-Wave Observatory) have directly detected the collision and merger of black holes. These observations confirm that stellar-mass black holes really do form from the collapse of massive stars, and we’ve now observed dozens of confirmed mergers (Abbott et al., 2016). Each detection teaches us more about the physics of collapse and black hole formation.

Perhaps most dramatically, in 2019, the Event Horizon Telescope collaboration released the first-ever direct image of a black hole’s shadow, the dark region around the supermassive black hole at the center of the galaxy M87. This image, captured using synchronized telescopes across Earth, provided visual confirmation that black hole formation produces real objects matching our theoretical predictions.

Why This Matters Beyond Astronomy

You might ask: why should professionals and knowledge workers care about how black holes form? The answer connects to several valuable insights. First, understanding stellar physics teaches us about systems with clear failure points. Just as a massive star inexorably approaches its end when its nuclear fuel depletes, systems in business, health, and personal development have critical thresholds. Understanding these thresholds helps us recognize when intervention is needed.

Second, black holes exemplify what happens when systems reach extreme states. The physics of black hole formation shows us that at certain densities and temperatures, the universe’s normal rules no longer apply. This mirrors how certain critical situations—organizational crises, health emergencies, or personal breakdowns—require fundamentally different approaches than routine management. Treating an extreme situation with standard methods fails, just as Newtonian physics fails near a black hole.

Third, the scientific investigation of black holes demonstrates how we gain knowledge about things we cannot directly observe. For decades, physicists reasoned about black holes through mathematics and indirect evidence. This scientific humility—making claims based on evidence while acknowledging limitations—is a skill valuable in any knowledge field.

Current Research and Open Questions

Contemporary research continues to refine our understanding of how black holes form. One active area investigates whether truly isolated black holes can form directly from a single massive star, or whether most stellar-mass black holes result from mergers of neutron stars or other black holes. The gravitational wave detections have complicated this picture by revealing mergers that challenge our previous mass expectations.

Another frontier involves understanding the relationship between stellar-mass black holes and supermassive black holes at galaxy centers. Supermassive black holes (containing millions to billions of solar masses) likely form through different mechanisms than stellar black holes, though some theories propose that stellar-mass black holes can merge and grow over cosmic time into supermassive ones.

Finally, the question of what truly happens at a black hole’s singularity remains unresolved. A complete theory of quantum gravity—combining quantum mechanics with general relativity—might reveal that singularities don’t actually form, or that they’re smoothed out by quantum effects. This represents one of theoretical physics’ deepest unsolved problems.

Conclusion: The Ultimate Cosmic Endpoint

How black holes form represents one of the universe’s most dramatic processes: the conversion of stellar matter into objects so extreme they bend spacetime itself. Beginning with massive stars burning through their fuel at ferocious rates, proceeding through supernova explosions, and culminating in the formation of the event horizon, black hole formation exemplifies physics at its most extreme and consequential.

From a massive star’s perspective, the path is inevitable. Once a star reaches sufficient mass, the sequence of nuclear burning, neutron star formation, and eventual collapse follows from physics alone. There’s no escape, no reprieve. Yet this cosmic drama has produced some of science’s greatest insights into gravity, spacetime, and the nature of reality itself.

For those of us interested in understanding our universe deeply, how black holes form offers a fascinating window into extremes—both scientific extremes and the metaphorical extremes we sometimes encounter in our own growth and challenges. The universe has much to teach us through its most dramatic events.

Your Next Steps

  • Today: Pick one idea from this article and try it before bed tonight.
  • This week: Track your results for 5 days — even a simple notes app works.
  • Next 30 days: Review what worked, drop what didn’t, and build your personal system.

Last updated: 2026-05-11


References

  1. Bueno, P., Cano, P. A., Hennigar, R. A., & Murcia, Á. J. (2025). Dynamical Formation of Regular Black Holes. Physical Review Letters. Link
  2. LIGO-Virgo-KAGRA Collaboration (2025). GW241011 and GW241110: Exploring Binary Formation and Fundamental Physics with Asymmetric, High-Spin Black Hole Coalescences. The Astrophysical Journal Letters. Link
  3. NASA Science (n.d.). Massive Black Holes and the Evolution of Galaxies. NASA Physics of the Cosmos. Link
  4. Tan, J. C. (2025). A New Model for Early Black Hole Formation Could Revolutionize Cosmology. Astrophysical Journal Letters. Link
  5. Fairhurst, S. et al. (2025). Study: Pair of Distinct Black Hole Mergers Reveals Clues on How They Form and Evolve. UNLV News. Link
  6. Max Planck Institute for Gravitational Physics (n.d.). Towards a Deeper Understanding of Black Hole Origins. AEI. Link


Two-Factor Authentication: What It Is and Why It Protects You

If you’ve ever received a text message with a six-digit code after entering your password, you’ve already experienced two-factor authentication in action. But most of us treat it as an inconvenience rather than understanding the critical shield it creates between our digital identity and potential attackers. In my years teaching both digital literacy and personal security practices, I’ve watched professionals routinely skip this protection—and I’ve also seen the real consequences when they don’t use it.

Related: digital note-taking guide

The truth is stark: passwords alone are no longer sufficient for protecting accounts that matter. A single compromised password can lead to identity theft, financial loss, and compromised professional accounts. Two-factor authentication remains one of the most effective defenses available to ordinary people, yet adoption rates among knowledge workers remain surprisingly low. This article breaks down what two-factor authentication is, how it works, and why adding this layer of security should be non-negotiable for anyone managing sensitive personal or professional information.

Understanding the Fundamentals of Two-Factor Authentication

At its core, two-factor authentication (often abbreviated as 2FA) is a straightforward concept: to access an account, you must provide two different types of evidence that you are who you claim to be. The first factor is typically something you know—your password. The second factor is something you have or something you are.

This two-step verification process addresses a fundamental vulnerability in password-based security. A password is just information. Once someone has that information—whether through phishing, data breaches, or keylogging malware—they can access your account. But with two-factor authentication, having your password isn’t enough. An attacker would also need possession of your phone, access to your email, knowledge of your biometric data, or control of your authentication app.

The mathematical protection here is elegant: instead of defending against a single point of failure, you create redundancy. This is why information security professionals universally recommend two-factor authentication for any account containing sensitive information. According to research from the National Institute of Standards and Technology, accounts using multi-factor authentication are substantially harder to compromise, even when passwords are weak (NIST, 2017).
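To illustrate that redundancy argument with numbers: if we treat a password leak and a second-factor compromise as independent events, their probabilities multiply. The specific figures below are made-up assumptions chosen only to show the shape of the math, not measured rates.

```python
# Illustrative only: treat password leak and second-factor compromise as
# independent events. The probabilities below are made-up assumptions,
# chosen just to show how the layers multiply.
p_password_stolen = 0.05    # assumed yearly chance your password leaks
p_second_factor = 0.001     # assumed chance the attacker also has your 2nd factor

p_takeover_with_2fa = p_password_stolen * p_second_factor
print(p_takeover_with_2fa)  # ~5e-05: a thousand-fold reduction vs. password alone
```

Real-world factors are not perfectly independent (a phishing page can capture both a password and a one-time code), which is one reason hardware keys, discussed below, matter for high-value accounts.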

The Three Main Types of Two-Factor Authentication

Not all second factors are created equal. Understanding the different types of two-factor authentication helps you choose the most secure option available for each account.

SMS and Email-Based Authentication

This is the most common type you’ve likely encountered. After entering your password, you receive a time-limited code via text message or email. You then enter this code to complete login. The advantage is accessibility—everyone has a phone number or email address. The disadvantage is that these channels can be compromised through SIM swapping, where an attacker convinces your mobile carrier to transfer your phone number to their device.

While SMS-based two-factor authentication is better than no second factor, security researchers increasingly recommend moving beyond it if possible (Grassi et al., 2017). Email-based codes are slightly more secure since email accounts themselves typically have security protections, but they’re slower to deliver and require switching applications.

Authenticator Apps

Applications like Google Authenticator, Microsoft Authenticator, or Authy generate time-based codes that update every 30 seconds. Because these codes are generated locally on your device and not transmitted over networks, they’re more secure than SMS. An attacker would need physical access to your phone to compromise them. This is why security professionals strongly prefer authenticator apps as the second factor for high-value accounts.

When you set up an authenticator app, you scan a QR code that contains a shared secret between the service and your app. Your device then generates matching codes independently. This means the service never transmits codes to you—they’re calculated on both ends using the same algorithm.
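The algorithm these apps run is TOTP from RFC 6238: an HMAC over a 30-second time counter, followed by "dynamic truncation" down to six digits. A minimal sketch, checked against the RFC’s published SHA-1 test vector:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at_time=None, step=30, digits=6):
    """Time-based one-time password (RFC 6238, SHA-1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at_time is None else at_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890", T = 59 s
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, at_time=59, digits=8))  # 94287082
```

Both sides hold the same secret and the same clock, so the codes match without any network transmission, which is exactly the property that makes authenticator apps stronger than SMS.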

Biometric and Hardware Authentication

The most sophisticated forms of two-factor authentication use something you are (fingerprint, face recognition) or something physical you possess (security keys like YubiKey). Biometric authentication leverages your unique biological markers—your device verifies these directly without transmitting them. Hardware security keys are small physical devices that generate cryptographic credentials; they’re nearly impossible to phish because they’re designed to verify the actual website you’re logging into. [2]

These methods offer the highest security but require devices that not all services support. However, for critical accounts—email, banking, cryptocurrency—hardware authentication keys represent the gold standard in two-factor authentication security. [1]

Why Your Passwords Alone Have Already Failed

Before diving deeper into implementation, it’s worth understanding why passwords have become an insufficient security mechanism. The average knowledge worker manages dozens of online accounts. Studies show people either reuse passwords across sites or create weak passwords they can remember. When a service suffers a data breach—which happens constantly—attackers gain not just passwords, but often usernames, sometimes recovery emails, and metadata about your account.

In my experience teaching cybersecurity basics, I’ve found that many professionals drastically underestimate how often their data appears in breaches. You can check yourself at haveibeenpwned.com, a service that lets you search if your email has appeared in known breaches. Most working professionals are surprised to find their credentials in multiple datasets.

The problem escalates when attackers use automated tools to test compromised credentials against popular services. Even if you use a strong, unique password for your email account, if that password was exposed in a breach of an unrelated service, attackers will try it everywhere. Two-factor authentication stops these credential-stuffing attacks cold. The attacker has your password but not your phone, not your authenticator app, not your security key.

The Practical Implementation of Two-Factor Authentication

Understanding the theory is one thing; actually implementing two-factor authentication across your digital life is another. I recommend approaching this systematically, starting with your highest-value accounts.

Prioritize Your Most Critical Accounts

Not all accounts are equally important. Your email account is the master key to your digital identity—it’s how you reset passwords for virtually everything else. Your email absolutely needs strong two-factor authentication. Similarly, banking, investment, and cryptocurrency accounts should have the strongest form of authentication available.

Social media, streaming services, and other convenience accounts can use weaker forms of two-factor authentication since the damage from compromise is lower. But the accounts that control access to sensitive information or financial assets deserve your best protection.

Set Up Your Authenticator App

Download a reputable authenticator app (Google Authenticator, Microsoft Authenticator, or Authy are widely recommended). When setting up two-factor authentication on a service, look for the option that offers “authenticator app” or “time-based one-time password.” You’ll scan a QR code, and your app will immediately start generating codes.

Here’s a critical step many people skip: write down or securely store the backup codes the service provides. If you lose your phone, these backup codes are your only way to regain access. Treat them like you’d treat a physical key to a safe deposit box—store them securely, separate from your phone.
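For illustration only, here is roughly what a service does server-side when it issues backup codes: generate a handful of high-entropy random strings. The alphabet (with look-alike characters such as 0/O and 1/I removed) and the counts below are my own choices, not any provider's:

```python
import secrets

# Hypothetical sketch of server-side backup-code generation.
def generate_backup_codes(count=10, length=8):
    alphabet = "23456789ABCDEFGHJKMNPQRSTUVWXYZ"  # no ambiguous characters
    return ["".join(secrets.choice(alphabet) for _ in range(length))
            for _ in range(count)]

for code in generate_backup_codes():
    print(code)  # each code is unguessable and meant to be used once
```

The takeaway for you as a user: the codes carry all the recovery power of your second factor, so your stored copies deserve the same care as the factor itself.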

Consider a Hardware Security Key

For your most valuable accounts, a hardware security key like a YubiKey (around $50) offers unmatched security. These work with Gmail, Microsoft, GitHub, and an expanding list of major services. When logging in, you simply touch the key after entering your password. The key performs cryptographic verification directly with the service—no codes to intercept, no apps to compromise.

The investment in a hardware security key pays dividends across any accounts that support it. Unlike SMS or apps, hardware keys cannot be phished or compromised remotely.
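Why can't hardware keys be phished? Real FIDO2/WebAuthn keys sign challenges with per-site asymmetric key pairs, but the core idea, origin binding, can be shown with a deliberately simplified HMAC stand-in. This is a teaching sketch, not the real protocol:

```python
import hashlib, hmac, os

# Simplified illustration (NOT real FIDO2/WebAuthn): the key's response covers
# the origin the browser actually connected to, so a relayed response from a
# phishing domain fails verification at the genuine site.

def key_sign(device_secret, challenge, origin):
    """What the 'key' returns: a MAC over the challenge plus the seen origin."""
    payload = hashlib.sha256(origin.encode()).digest() + challenge
    return hmac.new(device_secret, payload, "sha256").digest()

def server_verify(device_secret, challenge, expected_origin, response):
    """The real site recomputes the MAC with ITS own origin."""
    expected = key_sign(device_secret, challenge, expected_origin)
    return hmac.compare_digest(expected, response)

secret, chal = os.urandom(32), os.urandom(16)
# Genuine login: origins match, verification succeeds.
assert server_verify(secret, chal, "https://accounts.example.com",
                     key_sign(secret, chal, "https://accounts.example.com"))
# Phished login: the response is bound to the attacker's origin, so it fails.
assert not server_verify(secret, chal, "https://accounts.example.com",
                         key_sign(secret, chal, "https://accounts-example.evil"))
```

Contrast this with a code-based factor: a victim will happily type a six-digit code into a convincing fake page, and the attacker can relay it in real time. An origin-bound response cannot be relayed that way.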

Common Misconceptions About Two-Factor Authentication

I frequently encounter resistance to two-factor authentication based on misconceptions. Let me address the most common ones.

“It’s inconvenient.” Yes, it adds a few seconds to login. But the protection comes cheap: once you establish routines the friction fades, and the second factor blocks entire classes of attack that a password alone cannot. Considering the alternative is potential account compromise, a few seconds is trivial.

“I don’t have anything worth protecting.” Everyone underestimates what they have worth protecting until it’s compromised. Email accounts are worth protecting because they’re the master key to password resets. Social media accounts are worth protecting because of identity theft and impersonation. Adopting two-factor authentication isn’t about paranoia—it’s about basic operational security.

“If I lose my phone, I’ll be locked out.” Every legitimate two-factor authentication system provides backup codes for exactly this scenario. Keep these codes safe, and you’ll never be permanently locked out.

“Biometric authentication isn’t secure.” While biometrics can be spoofed in laboratory conditions, they’re secure in practice because they’re verified locally on your device, not transmitted across networks. Biometric two-factor authentication adds substantial real-world security.

Building a Sustainable Two-Factor Authentication Strategy

The most secure approach isn’t always the most practical for every account. I recommend a tiered strategy: hardware security keys for the accounts that unlock everything else (email, banking, cryptocurrency), authenticator apps for important accounts that support them, and SMS codes only where nothing stronger is offered, since any second factor still beats a password alone.


Last updated: 2026-05-11

About the Author

Published by Rational Growth. Our health, psychology, education, and investing content is reviewed against primary sources, clinical guidance where relevant, and real-world testing. See our editorial standards for sourcing and update practices.

References

  1. Farnung, J., Slobodyanyuk, E., Wang, P. Y., Blodgett, L. W., Lin, D. H., von Gronau, S., Schulman, B. A., & Bartel, D. P. (2026). The E3 ubiquitin ligase mechanism specifying targeted microRNA degradation. Nature. Link
  2. Mehra, T. (2025). The Critical Role of Two-Factor Authentication (2FA) in Mitigating Ransomware and Securing Backup, Recovery, and Storage Systems. International Journal of Science and Research Archive, 14(01), 274-277. Link

What Is Quantum Computing and Will It Change Everything?

If you’ve been scrolling through tech news lately, you’ve probably encountered the term “quantum computing” thrown around with the kind of breathless excitement usually reserved for the next iPhone. But here’s the honest truth: most coverage of quantum computing oversells the present while underselling the actual potential. As someone who’s spent time unpacking both the hype and the reality behind emerging technologies, I’ve learned that understanding quantum computing starts with ditching the mythology and getting clear on what these machines actually do—and what they can’t do (yet).


The fundamental question we need to answer is straightforward: what is quantum computing, and should you care about it? The answer depends on your field, your timeline, and your tolerance for uncomfortable uncertainty. Let me walk you through both the mechanics and the implications, drawing on current research and expert consensus rather than speculation.

The Classical Computer’s Limitation

Before diving into quantum computing, we need to understand why it exists. Classical computers—the ones you’re using right now—process information using bits. A bit is binary: it’s either a 0 or a 1, like a light switch that’s either off or on. Everything your laptop does, from rendering this text to encrypting your passwords, boils down to manipulating billions of these simple on-off switches at incredible speed.

This approach has gotten us far. Moore’s Law, which observes that the number of transistors on a chip doubles roughly every two years, has held true for decades. But we’re hitting a wall. Transistors are now so small (measured in nanometers) that quantum effects themselves start interfering with classical logic. More fundamentally, certain types of problems grow exponentially harder as they scale up. A classical computer trying to factor a 2,048-bit number—the kind used in modern encryption—would take thousands of years (Shor, 1997). This isn’t a speed problem; it’s a fundamental structural one.
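To see why this is structural rather than a matter of speed, consider the crudest factoring method, trial division. A back-of-envelope sketch of how the search space grows with key size:

```python
# Back-of-envelope only: trial division must test divisors up to sqrt(N).
# For a b-bit number N, that is on the order of 2**(b // 2) candidates.
def trial_division_candidates(bits):
    return 2 ** (bits // 2)

print(trial_division_candidates(64))    # ~4.3 billion: feasible on a laptop
print(trial_division_candidates(2048))  # ~2**1024: hopeless on any classical machine
```

Real attacks use far better algorithms than trial division (the general number field sieve, for example), but their cost still grows superpolynomially with key size. That is the wall Shor's quantum algorithm sidesteps.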

Understanding Quantum Computing: The Basics

So what is quantum computing, exactly? Instead of bits, quantum computers use qubits (quantum bits). And here’s where things get genuinely strange: a qubit can exist in a state called superposition, meaning it can be 0, 1, or both simultaneously until you measure it. Think of it like a coin spinning in the air—it’s neither heads nor tails until it lands.

This superposition property is powerful. While a classical computer with 3 bits can represent one of eight possible values at any given moment (0-7), 3 qubits can represent all eight values at the same time. Scale this up to 300 qubits, and you’re theoretically representing more simultaneous states than there are atoms in the observable universe. That’s the promise of quantum computing: exploring vast solution spaces in parallel.
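The bookkeeping behind that claim can be made concrete: an n-qubit state assigns an amplitude to each of the 2^n basis states, and a Hadamard gate on every qubit spreads the amplitude evenly. A minimal sketch in plain Python, just the arithmetic with no quantum hardware involved:

```python
import itertools, math

def uniform_superposition(n):
    """Amplitudes after applying a Hadamard gate to each of n qubits in |0...0>."""
    amp = 1 / math.sqrt(2 ** n)
    return {"".join(bits): amp for bits in itertools.product("01", repeat=n)}

state = uniform_superposition(3)
print(len(state))                          # 8 basis states tracked at once
print(sum(a * a for a in state.values()))  # squared amplitudes sum to ~1.0
```

Note the flip side: these are not eight answers you can simply read out. Measurement yields a single basis state, with probability given by the squared amplitude.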

But there’s a catch—actually, several. When you measure a qubit to get your answer, the superposition collapses to either 0 or 1. The art of quantum computing lies in designing algorithms that amplify the probability of the right answer while canceling out the wrong ones through something called quantum interference. It’s less like a computer and more like a specialized problem-solving tool with exacting requirements.
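Interference is easiest to see in the smallest possible example: apply a Hadamard gate to one qubit twice. The first application creates an equal superposition; in the second, the two paths to |1⟩ carry opposite signs and cancel exactly, returning the qubit to |0⟩. A sketch of that amplitude arithmetic:

```python
import math

# Track the two amplitudes (a0, a1) of a single qubit through Hadamard gates.
def hadamard(state):
    a0, a1 = state
    s = 1 / math.sqrt(2)
    return (s * (a0 + a1), s * (a0 - a1))

state = hadamard((1.0, 0.0))  # equal superposition: both amplitudes ~0.707
state = hadamard(state)       # paths to |1> cancel: back to (1, 0) up to rounding
print(state)
```

Quantum algorithms choreograph this same cancellation at scale, arranging for wrong answers to interfere destructively so the right one dominates the measurement.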

A second quantum property, entanglement, adds another layer of power. Entangled qubits are correlated in a way classical bits cannot be: measuring one immediately constrains the outcomes of the others, regardless of distance. This lets quantum computers process information in ways that feel counterintuitive to anyone trained on classical logic, and it is central to their computational advantage (IBM Research, 2023). [3]

Current State of Quantum Computing Technology

We’re currently in what researchers call the “NISQ era”: Noisy Intermediate-Scale Quantum. This is not a sign of failure; it’s reality. Today’s quantum computers, offered by companies like IBM, Google, and others, have between 50 and a few hundred qubits. These machines are extremely sensitive and error-prone. Qubits can decohere (lose their quantum state) in microseconds, and environmental vibrations, electromagnetic interference, even stray cosmic rays can corrupt their states. The error rates are high enough that today’s quantum computers remain proofs of concept rather than practical tools for real-world problems. [1]

Google made headlines in 2019 claiming “quantum supremacy”—performing a calculation in 200 seconds that would take a classical computer 10,000 years. The reality was more nuanced: the problem they solved had no practical application and was specifically designed to showcase quantum advantage (Arute et al., 2019). This matters because it highlights the gap between theoretical quantum computing and practical utility. [2]

That said, progress is accelerating. IBM’s roadmap targets systems with over 4,000 qubits by 2025. Companies are experimenting with different qubit architectures—superconducting qubits, trapped ions, topological qubits—each with trade-offs in stability, error rates, and scalability. The field is genuinely moving forward, even if quantum computing remains inaccessible to most organizations. [4]

Where Quantum Computing Will Actually Matter

So what is quantum computing actually good for? Not everything. This is crucial: quantum computers won’t replace your laptop or smartphone. They won’t improve general-purpose computing. But in specific domains, they could be transformative. [5]

Drug Discovery and Materials Science represents perhaps the most immediate application. Modeling how molecules interact with disease targets or designing new materials with specific properties requires simulating quantum systems—which quantum computers do naturally. Pharmaceutical companies and materials scientists are already experimenting with quantum simulators to accelerate development timelines.

Optimization Problems are another sweet spot. Supply chain optimization, portfolio optimization for finance, traffic flow management—these are problems with astronomically large solution spaces. Classical computers use heuristics and approximations; quantum computers might find better solutions faster. Financial institutions are actively exploring quantum algorithms for this reason.

Machine Learning integration is an emerging frontier. Certain quantum algorithms might accelerate specific machine learning tasks, particularly in pattern recognition and feature analysis. However, whether quantum advantage will materialize here remains unclear—the hype has often outpaced evidence (Preskill, 2018).

Cryptography is where quantum computing poses both a threat and an opportunity. Quantum computers could break current encryption methods, which is why governments and security agencies worldwide are developing “quantum-resistant” cryptography. Simultaneously, quantum key distribution offers theoretically unbreakable encryption. This is genuinely urgent: adversaries are likely harvesting encrypted data now to decrypt later when quantum computers become available.

The Timeline: When Will Quantum Computing Matter to You?

Here’s where honesty matters. If you’re not working in cryptography, pharmaceutical research, or advanced materials science, quantum computing probably won’t directly affect your work in the next 5-10 years. We’re still in the research and development phase. Fault-tolerant quantum computers, with error rates low enough for practically useful computation, remain years away, likely a decade or more.

However, this doesn’t mean you should ignore quantum computing. Awareness now positions you better for a future where quantum-classical hybrid systems become standard tools in certain industries. If you work in data science, finance, or any field involving complex optimization, becoming familiar with quantum principles and algorithms now means you won’t be blindsided later.

The practical reality for most knowledge workers is this: quantum computing is coming, but incrementally. It will arrive first as cloud-accessible services from companies like IBM and Amazon, available to those who need them. Organizations will gradually integrate quantum solvers into workflows for specific bottleneck problems. This evolution will likely take 10-15 years before it approaches the “quantum will change everything” narrative you hear today.

The Hype Versus the Reality

Tech hype cycles follow a predictable pattern: initial excitement, disillusionment when reality doesn’t match the dream, then gradual progress as serious researchers do the grinding work. Quantum computing is currently in the excitement phase, with healthy doses of disillusionment creeping in among informed observers.

The challenge is separating real potential from marketing. When a startup claims their quantum algorithm will revolutionize your industry, ask specific questions: What problem does it solve? What’s the time horizon? What evidence supports the claim? The answers will likely reveal the marketing gloss.

That said, dismiss quantum computing at your peril. The fundamental science is sound. The investment is genuine—billions of dollars from governments, tech companies, and venture capital. And the problems it could solve are genuinely important. This isn’t cold fusion or perpetual motion; it’s physics and mathematics working exactly as predicted, just hitting the messy constraints of engineering reality.

What You Should Do Now

If you want to stay ahead of the curve without getting lost in technical jargon, here’s a practical roadmap:



Your Next Steps

  • Today: Pick one idea from this article and act on it.
  • This week: Track your results for 5 days — even a simple notes app works.
  • Next 30 days: Review what worked, drop what didn’t, and build your personal system.

References

  1. Preskill, J. (2018). Quantum Computing in the NISQ era and beyond. Quantum. Link
  2. Moussa, O. et al. (2024). Quantum Computing: Foundations, Architecture and Applications. Engineering Reports. Link
  3. Alqahtani, H. et al. (2024). Quantum Computing: Vision and Challenges. arXiv preprint arXiv:2403.02240. Link
  4. National Science Foundation (2024). Quantum computing: Expanding what’s possible. NSF Science Matters. Link
  5. National Academies of Sciences, Engineering, and Medicine (2019). Quantum Computing: Progress and Prospects. National Academies Press. Link
  6. Oliver, W. (2024). Quantum computing reality check: What business needs to know now. MIT Sloan Ideas Made to Matter. Link