The Illusion of Insight: AI Journalling Applications
AI journalling applications aim to transform the way we engage in self-reflection and personal growth. Leveraging AI chatbots such as ChatGPT-4, these applications provide “analysis” of your journalling, frameworks to work with, generative images created from your writing, and access to chatbots trained on philosophers, allowing you to engage in “coaching” with, say, Aristotle or Alan Watts. I’ve tested one such application, Mindsera, and browsed a few others. I’m left with a feeling of mild amusement and a wider disquiet.
Let’s dive into my relatively brief experience. There are a few modes from which you can choose, the first of which is simple free writing with the option to “analyse” it once you’re done. My first experiment was a short piece of writing which produced a summary that was little more than my own words rephrased. The bot tried to summarise my emotional state with percentages of moods, which felt quite random. The “Personality” analysis felt like another recap, albeit with nice-enough mirroring language, and the “Suggestions” read like “Self-Care 101”. I had no more satisfaction with a longer entry a few days later. The platform also produces an AI image based on the entry, which I find nice enough to look at, but just odd. For me, this kind of “aided self-reflection” is useless and, more widely, concerning, which I’ll delve into later.
I then tried “Infinite Prompts”, which I actually found somewhat useful. For me, this is where some of the coaching apps could be a useful companion - a sort of “Level 1-2” work for people to easily access. With this part of the app, you simply start writing and the bot responds with questions. By the fourth question, it was asking, “How do you navigate the tension between your desire for nurturing and your fear of being seen as needy in a world that often stigmatizes vulnerability?” I felt this was actually a good question, one I might ask in a session, and relevant to what I’d written.
I also tried using Alan Watts as a chat companion, which was worth the few minutes I spent on it - philosophers on tap, if you will. And finally, as I was writing this, I delved into “Frameworks”: potted summaries of thinking models and the opportunity to work through an issue or thoughts using one of them. Again, I can see the utility, particularly in moments where one feels stuck.
In summary, I can definitely see some utility in these apps. And it’s always on my mind that the first reactions of most people to new, potentially threatening, technologies are derision, dismissal, and defensiveness. Equally, I want to explore some of my disquiet about this.
On a surface level, there are questions of privacy. Time and time again, tech companies have shown themselves to be poor custodians of private information. Mindsera has a good privacy statement (https://help.mindsera.com/privacy), and I would assume most apps of this self-improvement sort have something similar. That doesn’t change the fact that it’s difficult for a layperson to understand the various layers their information passes through - ISP, browser, app, the device itself. In an era where countries and states are lurching towards making various identities illegal, paper and pen are probably a safer, wiser choice for deep exploration.
Speaking of paper and pen, this was one of my first blockers to using the app. My practice is to journal first thing in the morning, before engaging with my phone or a computer. Writing physically has been shown to engage different parts of the brain than typing. It’s a process that genuinely helps self-reflection, which is why I’m always recommending physical journalling to my clients. For me, engaging with a computer first thing in the morning, even just a blank screen to write on, felt really disruptive. I think this speaks to broader questions society is still grappling with - how much time in front of a screen is useful, what are the impacts of greater screen time, and how much time do we WANT to be spending with technology? I ended up framing this as a question of utility - do I really NEED an AI journal, or is it actually diminishing my experience? - and the answer for me was definitely diminishment.
The word “diminishment” leads me to the word “erosion”, which encapsulates the crux of my worries about AI self-improvement. As we know from collapsing coastlines and houses falling into the sea, erosion can be a big deal! What I worry about is the acceleration of the erosion of core human skills - discernment, access to emotions, tolerance of (and even enjoyment of) ambiguity. I say “acceleration” because we really can’t deny that the firehose of content most of us consume has changed our discourse, our ability to relate, and our self-perceptions, mostly for the worse. I’m concerned that so-called self-examination with a machine will lead to increases in narcissism (the machine says I’m right, or that this is a positive direction, so it must be) and neurosis (the opposite - the machine says this entry is 75% anxious… I knew it, I have anxiety!). I’m concerned that the perception of “answers” to our greatest human dilemmas further reduces our ability to navigate all the dark, grey, murky corners of our existence. I’m concerned that receiving emotional reflection from a screen strips away the energy and nuance of being with another human, and that those aspects of relationality may become ever more difficult for some individuals to achieve. Above all, I’m concerned that for decades we’ve been losing the art of teaching discernment to our kids, producing adults who don’t recognise genuine sentiment, worthy insight… or even decent, logical writing and argument. So even the bare minimum, a machine rephrasing what you’ve written and giving you a metaphorical thumbs up for your “insight”, will be taken as a valuable endorsement, when in fact it’s just more baby food for the brain.
The echoing voice in my head these past few months asks, “Is this REALLY where we want to go?”, coupled with “Are we smart enough to realise what we don’t want and act in accordance with that wish?” I’m afraid the answer may be no - both societally and even biologically, ease, convenience, and material gain are things we simply go for. A rather sombre note on which to end an article, but that’s where I’m at.