If there's one thing you should know about me, it's that I'm a chronic overthinker who doesn't like to think too far into the future. Yes, it sounds counterintuitive. After all, overthinkers like me often ruminate excessively, live by to-do lists, and plan out as much of our lives as we can.
But me? That's not my jam. I'm the epitome of the Sylvia Plath fig tree analogy. I'll think over and over and over about potential life paths and potential versions of me: I could be a teacher in South East Asia! I could quit my job, travel the world, and write a feminist post-apocalyptic novel! I could become a career woman, climb to the top of the ladder, and enjoy a carefully curated apartment filled with cute rugs, knick-knacks, and fun lamps! (Imagine the ambience!). But if you asked me to actually choose any of these potential lives, I'd be stumped. A five-year plan? Absolutely not. Ten concurrent hypothetical five-year plans? That's more like it.
In my experience, all this imagining of potential lives has left me plagued with decision paralysis, unable to commit to anything because, well, what if it's the wrong choice? I've chatted with my manager about my career aspirations, I've spoken with friends and family about hypothetical places I could move to, and I've ranted to my therapist about my love life. Yet I still cannot decide what life I should live.
So, like any sane person, I enlisted the help of a robot. Specifically, ChatGPT.
Yes, it's gotten a bad rap as of late. Something about stealing jobs, students using it to write essays, and being biased. But despite these shortcomings, people have started coming up with extremely clever ways to utilise the hot little artificial intelligence on the block. Some have plugged in job descriptions and asked ChatGPT to rewrite their entire resume (quite successfully, I might add). Others have had ChatGPT write out dream meal plans and shopping lists based on their health goals and what they have in the fridge (clever). People have even had ChatGPT write out entire morning routines to become 'that girl'. So it seems only sensible that I entrusted the creation of a five-year life plan (my first ever) to a robot. (Don't tell my therapist, please).
The first step was sharing a few details about myself with ChatGPT, because how can you expect a bot to plan out your next five years if they don't know what your favourite band is? I tried to keep it as concise (and realistic) as possible. In the future, I'd like to have a cool editor role. I'd like to move countries (although, unsurprisingly, I have no idea where). I'd like to start dating again and maybe find a long-term partner. Then a couple of obligatory requests: I want to travel, take care of my mental health, walk more, save money — the usual.
Initially, ChatGPT gave me a bit of an average answer. "Focus on your career goals", it said. "Prioritise mental health by practicing mindfulness daily". Okay. "Consider moving to a new country that aligns with your desired lifestyle". That's all well and good, but, like, where am I moving to? It was essentially a regurgitation of what I had already said in my prompt. It was telling me things I know I should do (meditation), but will realistically never action unless someone forced me to (also meditation). A lacklustre beginning, but I was not about to give up hope.
"Okay, can we dig into the year-one plan a bit more?", I wrote. "What city should I move to?" I plugged in some very specific requirements — a city with a focus on arts and culture, a good live music scene, affordable, and with plenty of hiking opportunities. ChatGPT spat out five cities that it thought I would like, complete with each city's visa application requirements for Australians. Wellington, Vancouver, Reykjavik, Lisbon, and even Melbourne were good options for me, according to my bot friend. But I could feel the anxiety reemerging, the fig dangling in front of my face, imploring me not to make a decision. Too many options! Too many possibilities! Too many potential lives!
While suggesting a bunch of cities that might be my cup of tea was one thing, the practicality of actually choosing one was another beast. I could feel my anxiety rising and my overthinking monster rearing its ugly head. Sometimes known as 'analysis paralysis', the paradox of choice describes what happens when we're presented with so many options that we become overwhelmed and unable to choose at all. It's something I've experienced my whole life — and unfortunately, if you spend all your time thinking about where to move to, you probably won't even move.
For the last few months, every family member and friend has fallen victim to my incessant worrying about what my future will look like, especially when it comes to the decision of where I should move to. "They all sound like good options," my best friend says. "Canada is fun," says another. "What about living in Broome?", suggests my mum, for reasons unknown. With so many hypotheticals, I just needed someone to make the decision for me. And I wasn't about to let ChatGPT become another person encouraging me to make my own decisions — I wanted it to choose for me.
After a brief altercation following ChatGPT's suggestion that I move to Melbourne (I will not be dating any more Australians at this time, thanks), it apologised and finally gave me a concrete answer — something I'd been craving for so long. Berlin, Germany. Let me tell you, the bot went hard on the sell, praising Berlin for its thriving music scene, vibrant arts and culture, affordability, and plethora of outdoor spaces. ChatGPT did what my friends, family, and even I couldn't do for myself — it made a solid, educated decision based exactly on what I was looking for.
Then it was time to confront another key element of ChatGPT's five-year plan for me — my (abysmal) dating life. As part of the first year of my five-year plan, ChatGPT suggested I explore the dating scene by joining online dating platforms and social clubs, as well as have a friend introduce me to someone new (uh, not a chance). Instead of simply accepting its rather generic, stock-standard advice about my love life, I decided to get vulnerable, confessing that I didn't feel like I was ready to date just yet. And instead of giving me another set of generic answers, it actually comforted me.
"If you don't feel ready to go on dates yet, that's okay!" the robot said. "Everyone moves at their own pace when it comes to dating and relationships." Instead of suggesting I push through, ChatGPT said I should focus on my personal growth by taking up a hobby or pursuing a new interest. It said that I should dedicate more of my time to cultivating meaningful friendships. And yes, it even told me to go back to therapy. "Take your time, be kind to yourself, and focus on building a life that brings you joy and fulfilment," it assured me.
I was starting to realise that, while ChatGPT's advice was rather stock-standard at first, it also understood what a human needs — in a weird, robotic way. Through every message, it was learning about who I was and adjusting its answers to show what I needed to hear. It's one thing to have your friends give that kind of supportive advice, but it's a completely different ballgame to have a literal computer tell you that you need to be kinder to yourself.
"So, how would I launch my career in Berlin if I were to move there?", I asked, desperate for concrete answers to soothe the anxious bug in my head. I had expected generic advice like 'network' or 'research media companies', which, admittedly, it did give me at first. But after some prodding, ChatGPT gave me specific media companies to look into, how much money I could expect to make, and the average rent for popular neighbourhoods. Hell, it even gave me career suggestions based on my current role and my paycheque, as well as tax regulations. Concrete information. Concrete decisions. At last.
In a way, ChatGPT made me confront the analysis paralysis that I've been plagued with for years. While my loved ones have often tried to be supportive in helping me decide which Sylvia Plath fig to eat (or what life path to follow), ultimately, the decision has remained in my own hands — they've just been there as supporting cast members, telling me I can do anything I want. While ChatGPT too initially started out that way, by giving me a plethora of options to choose from, it ultimately gave me a solid answer to my problems — something no person has been able to give me.
There's a strange feeling when you realise a computer can help a human make better decisions than the human was able to make herself. That a computer could 'understand' my anxious thoughts and varied interests and very specific criteria for new cities and understand where I'd feel most at home. That artificial intelligence has at least some basic grasp of emotions, constantly assuring me that it's okay that I still have a bit of fear around dating, or that I just need to focus on myself. That, if pushed to, it can single-handedly eliminate all decision fatigue by just giving me one answer. No grey areas.
After over an hour of talking with each other, I asked ChatGPT what my life would look like five years in the future. According to my new robot friend, I'll be a senior editor at a publication in Berlin, London or Vancouver. I'll be fluent in the local language and have a group of friends from different backgrounds. I'll have a healthy savings account. I'll have a consistent routine for exercise and self-care, and will feel more energised and resilient. I will have found a long-term partner and be building my life together with them (or I might just be comfortable with the idea of dating).
I'll be happy.
And I'll finally have eaten that damn fig.