Museums can be nice. I liked the Montréal Museum of Fine Arts very much. But I think I miss getting the metro more. It put the London Underground to shame. As did the chips. And I suppose I miss poutine. And Christmas spent alone in an equally lonely city, an island of nearly two million missing for a day. And I miss how easy it was to buy the homeless Tim Hortons on the pound when it would have been inexcusable, at that price, to miss an old man in the cold smile custard into his beard, somewhere between the empty hands of a clock. And St Joseph’s Oratory, of course, of course, with its ceilings higher than all hopes, where I indulged that I might be having a (very) late religious experience before admitting it was altocelarophobia. And I miss Mount Royal and that tableau in the tundra, etched from the summit, where Leonard Cohen’s face peered over the frozen city. Yes. Museums can be nice.
Maybe life is bad, or is bad enough, or risky enough, or uncertain enough, or scary enough, or unknown enough to make procreation almost always wrong. Maybe the person who does not appreciate life, particularly her own, will be your child. Maybe making the most important and far reaching decision on behalf of another person is something we should not do, if we can help it…
Rivka Weinberg, ‘Is Procreation (Almost) Always Wrong?’, in The Risk of a Lifetime: How, When, and Why Procreation May Be Permissible (Oxford: OUP, 2015), pp. 120–152 (p. 152).
Whilst antinatalism’s most prominent advocates and scholars can be accurately described as possessing a morbid or pessimistic mien which, understandably, upsets and drives away most audiences, I hope this article, in delineating several antinatalist stances alongside my own nuanced view, will move away from this. Nevertheless, I am acutely aware of the sensitivities surrounding this subject, and of the risk of offending them. Consequently, I find that a brief apologia is in order. As William Godwin wrote, beautifully, in his Enquiry Concerning Political Justice (2015):
…ideas are to the mind nearly what atoms are to the body. The whole mass is in a perpetual flux: nothing is stable and permanent; after the lapse of a given period not a single particle probably remains the same. Who knows not that in the course of a human life the character of the individual frequently undergoes two or three revolutions of its fundamental stamina? (41)
This framing of what follows should not be mistaken for a lack of conviction. I cite Godwin simply to state that what I now write I write in the transience of the present moment, as one cognizant of my forthcoming ‘revolutions’. Although I first came across antinatalism five years ago, have reflected upon it frequently since, and now believe it is a philosophy I will never abandon, I am nevertheless aware that change is not only possible but inevitable. As the Italian novelist Elena Ferrante (2018) said, ‘what we were at the beginning is only a vague patch of colour contemplated from the edge of what we have become’, and what I write now is exactly that: the product of a moment, driven to broaden and deepen both my own understanding and that of others, with no intent to offend.
Antinatalism: What is it?
To some extent, antinatalism is self-evident. The OED defines the prefix ‘anti-’ as ‘opposite, against, in exchange, instead’ and ‘natal’ as ‘of or relating to a person’s birth’. Antinatalism, then, can be summarised as a position that advocates or personally adopts an opposition to procreation. The less immediately obvious facet of antinatalism is its moral locus: an understanding of existence wholly grounded in the conviction that procreation is a harmful act or, at the very least, a potentially harmful one. Procreation, from this perspective, is affixed an irrevocably negative moral value: to be born is a harmful event entailing immense suffering, which antinatalists find unconscionable enough to conclude that procreation is morally wrong. From this relatively uniform starting position, however, divergences begin to emerge. For example, many antinatalists hold the attendant view that their own birth was a negative event, such as Robert Smith, frontman of The Cure. Whilst Smith does not identify explicitly as an antinatalist, it is difficult not to view his comments as either wholly sympathetic to antinatalism or, at least, existing in chirality with it:
I’ve never regretted not having children. My mindset in that regard has been constant. I objected to being born, and I refuse to impose life on someone else. Living, it’s awful for me. I can’t on one hand argue the futility of life and the pointlessness of existence and have a family. It doesn’t sit comfortably… I enjoy myself hugely… but you know, it’s despite myself, really. (Pattison 2011).
Something Smith’s (2011) self-reflexive account doesn’t touch upon is how antinatalism is predominantly understood in opposition to the consequentialist, liberal philosophy of utilitarianism: the branch of philosophy created by Jeremy Bentham, popularly espoused by his intellectual descendant John Stuart Mill, and deeply concerned with social idealism. In short, utilitarianism insists that all possible actions a person undertakes must seek to maximise utility. What constitutes ‘utility’ has undergone various definitions according to the philosopher discussing it, but antinatalism is often situated against a saliently Benthamite notion of utility, defined by the sum of all possible pleasure that results from an action. That is to say, utilitarianism dictates that all our actions should be orientated towards ensuring the maximum amount of pleasure or happiness for the majority of the population. Contradistinctively, antinatalists are often driven by what one may call a negative utilitarianism: the conviction that ameliorating and minimising the possible or real suffering of others carries greater moral importance than maximising their relative pleasure or happiness. It is this tenet that most attracts me to antinatalism, and one may already see why negative utilitarianism readily lends itself to the antinatalist position.
At this juncture, however, two observations are particularly striking. Firstly, as the words ‘pleasure’, ‘happiness’, and ‘suffering’ suggest, this is a deeply experiential and subjectively orientated area of philosophy that, as Rivka Weinberg (2015) observes, elicits ‘no authoritative answer’ (129). Secondly, as aforementioned, to those unfamiliar with the term, antinatalism may sound like a deeply pessimistic, ‘dark view’ of existence that places undue precedence on pain over pleasure (Weinberg 2015: 120). Refusing to have children will already appear to most as an outlandish suggestion, let alone the added, seemingly acerbic assertion that to procreate is a cruel, unconscionable attack. However, as I will argue, whilst antinatalism has certainly been adopted by a decidedly morbid minority, it is not a view universally espoused by the chronically angst-ridden, nor one in which pain’s place as the principal motor of one’s actions is merely intellectualised melodrama. Pain is important – but antinatalism is more than that. To address this, however, we must first ask why anybody is an antinatalist in the first place.
I began this article by mentioning that the birthrate in England and Wales has fallen to its lowest level since records began over 80 years ago. Whilst the causes of this are manifold, such as declining fertility rates, Professor Ann Berrington (2019) does briefly touch on a far more interesting cause: the lack of affordable housing. This is an unsurprising comment, although it deserves far greater attention. A recent paper by Donald Hirsch (2017) and the Child Poverty Action Group offers a grim examination of Britain’s dire and increasing impoverishment, which is actively affecting birthrates. But, of course, we have been here before, over a century ago, when a complete dearth of social security with regard to pensions, housing, education, and healthcare was believed to be the major cause of declining birthrates (Renwick 2018: Ch. 4). People simply didn’t want to have children and, as Sidney Webb wrote in his Decline of the Birth-Rate (1907), it was absolutely essential for the country to restore the population’s confidence in the security of the future, in order for them to procreate and produce the social reformers the country would require. The differing responses from across the contemporary political spectrum led legislators and the literati, eventually and gradually, to produce the welfare state we in the UK know today. Constructed from what Alfred Marshall would aptly call ‘mixed [political] parentage’, the welfare state acts as a holistic form of social security that is nevertheless in constant redefinition and reiteration, satisfying nobody completely, and yet delivering ‘a level of national unity on social policy that had never been tried before – or, indeed, since’ (Renwick 2017: 264). It is unsurprising that, thereafter, a baby-boom blossomed.
Writing today, I wholeheartedly believe that the current socioeconomic climate is playing a much larger role in people’s decision not to have children (much as it did over a century ago). Yet, despite Professor Philip Alston’s recent UN report on the sheer scale of Britain’s widespread and debilitating poverty, willingly enacted by the Westminster political class, I remain quietly and persistently optimistic. As the apocryphal Mark Twain quote goes, ‘History does not repeat itself, but it rhymes’, and this significant decline in birthrates will ultimately lead to the state having to respond in some way amenable to procreation – which will mean concessions from the neoliberal status quo. Yet, this societal reticence towards having children cannot be considered tantamount to a rejection of the principle of procreation – that is to say, it isn’t self-consciously antinatalist. But it is indicative of an inherent anxiety and capacity in people to perceive human suffering in the future – with antinatalism itself not necessarily being an unthinkable position. It is a position and question that fully merits our attention here, and one from which the world would undoubtedly benefit if it were adopted by a larger minority. However, whilst these material realities are certainly a factor in my own antinatalism, as aforementioned, they are not the defining motivation for my interest in it. Rather, it is the discussions within academia, and their explorations of the more abstruse, existential reasoning behind why some people are antinatalists, that I have found compelling.
One answer to why some people are antinatalists is the subjective conviction that life is bad. As Rivka Weinberg’s (2015) essay begins, ‘If my gut is right, having children is probably almost always wrong because, if life is bad, then we are putting people into a bad situation by creating them’ (120). But as Weinberg continues, we can’t make the binary distinction of a ‘good’ and ‘bad’ life on experiential grounds universally applicable, and it is this experiential ground that dominates in the formation of an antinatalist perception of life, rather than ‘many other sorts of value… moral, aesthetic, scientific and so on’ (121-22). One may argue, though, that it is unclear how the moral and the experiential are cleanly separable fields, let alone which would then hold greater influence in a given person’s life.
Subsequently, Weinberg (2015) focuses on the suitable conditions of procreation from the predominantly experiential vantage point, before tackling another recurrent bugbear among antinatalists: gaining consent of the procreated. Weinberg (2015) identifies several conditions that must be satisfied for procreation to be permissible:
WE HAVE A VERY STRONG INTEREST TO DO SO: Parents have to be motivated to have children with the desire to create happy, fulfilled people, rather than driven by ‘manipulative, cruel, or disrespectful’ motivations. (132)
TO MITIGATE DAMAGES: Similar to 1., these new lives must be assisted in coming to terms with the predicament of existence, rendering them confident that life is not objectively ‘bad’ (132).
This evidently informs Weinberg’s (2015) argument that, if the above conditions are met, there is no consent violation when it comes to procreation. Consent violation is significant to antinatalists for fairly self-evident reasons, as Weinberg (2015) adumbrates:
Since procreation imposes the risks of life on a child… we must have actual consent to being procreated. Of course, they [parents/guardians] cannot get their child’s actual consent to being procreated. And therein lies the problem. There is no solution (138).
But Weinberg (2015) proffers a deceptively simple countervailing position: ‘children do not have autonomy or consent rights because they aren’t competent to exercise them… [they] are incompetent, they require our paternalistic care, and paternalists are allowed… to impose risks upon them for their own benefit, including for their “pure benefit”’ (138). ‘Pure benefit’, here, is a coinage by Seana Shiffrin (1999) which refers to a benefit that is good but whose absence would not be considered a harm or a deprivation – something which, for Shiffrin, can only be imposed if actual consent is gained from the beneficiary (124-126). We can consider birth and existence to be one of those ‘pure benefits’, as Weinberg does in her argumentation, as it isn’t necessary or required but can be good.
But this doesn’t clear up the consent question, nor can one definitively believe existence is good. A child cannot give consent to whether or not they want to be born, but the whole issue of consent is more complicated than that. For example, it is unclear in the prevailing, global socioeconomic system whether children are or are not autonomous beings capable of giving consent. It would prove interesting, for example, to investigate how much revenue children have generated in actively giving their consent to global businesses such as YouTube, and other app/tech companies, during this unprecedented era in which many of them in the Western world have a familiar relationship with tablets and other technological devices from a very early age. They are data factories, generating revenue for big business, and are treated as consenting individuals, as adults, who are willingly surrendering this information from a position of cognizance. If Weinberg is correct, however, and children can’t give consent, whether or not these companies can extract profit from them is another question (I would argue – they shouldn’t). Or are there different types of consent?
Hair-splitting tangents aside, and more pertinently, Weinberg’s (2015) argumentation does not convincingly deal with what I will call the delayed/eventual actualisation of finitude (D/EAF) and consent. By this, I refer to the moment or stage in which a child becomes conscious enough to discover their own mortality and position in the universe – their D/EAF. How does the issue of consent change when a child eventually realises that they exist, when they reach the delayed/eventual actualisation of their finitude and discover some, if not all, the attendant conditions and eventualities that this entails? As aforementioned, according to Weinberg (2015), a parent can produce a child on the grounds of necessitated paternalism and the child’s inability to provide actual consent. But what about when they become adults and could give retroactive consent? When a child reaches their D/EAF, they may reach the conclusion that existence is not a burden worth bearing. Was assuming their consent, thus, a violation? Or is it only a violation in the act of actualisation? How is one expected to maintain a clear conscience knowing all that a child-adult has endured is the result of parental actions?
Moreover, some of the warmer and fuzzier elements of the above arouse suspicion. Much as Sidney Webb’s (1907) paper argues that procreation is essential to produce tomorrow’s socialists, Weinberg’s (2015) conditions for procreation are underpinned by the assumption that children are deliberately produced in order for adults ‘to engage in a mutually beneficial and respectful relationship with them’ (149). Both Webb and Weinberg presume that parents will have (and are driven towards) positive relationships with their children, that the children were deliberately created (‘planned’), and that parents will be able to decidedly shape and inform their characters. This, I believe, is misguided. One of the central battles that emerges within the filial/parental architecture, almost from day one, is between the child’s autonomy and their parent’s paternalism. Children typically, eventually, break away from the parental architecture in the development of their autonomy, and sometimes subsequently oppose that architecture – children are the product of this dialectic (autonomy v. paternalism), and autonomy (or the illusion of autonomy) almost always succeeds in the end.
Whilst I am sure a parent can exert influence during a child’s formative years, when the grip of paternalism is typically (though not always) at its strongest, it is entirely possible (and frequently the result) that a child will develop within that aforementioned dialectic (autonomy v. paternalism), through their environment and filial bonds, in a way that is not predictable and cannot be channelled with exactness into a desired form. This is evident when one examines the nature of siblings: siblings can prove to be very different people in their respective characters, motivations, affinities, disunities, and their perspectives on things, things as quotidian as whether or not to wear washing-up gloves all the way up to the bigger questions of purpose, finitude, procreation, meat-consumption, etc. All of these divergences can and do occur despite an identical environment for their formative development and despite a consistent parental presence, a testament to the unknowable autonomy of offspring even during the earlier stages of development. My point here being: we can’t know how a given life and individual will unfold, and we don’t know when, how, and to what extent a parent or guardian will influence their child’s subjective perception of life, in what respect(s), and whether that influence will be conducive to what one may describe as a ‘good’ (or utilitarian) life.
It is this unknowability and infinite potentiality that I find most disarming, and what drives my own interest in a personal antinatalism. Perhaps Martin Amis gave the best illustration of this anxiety towards the expected-utility hypothesis in his novel Time’s Arrow (1991). The reader takes the narrative perspective of a Nazi doctor who is re-experiencing the course of his own life anti-chronologically, starting from his death in his garden in America and ending with his birth in Germany. Throughout the novel, during his sleep, the narrator has recurring nightmares about an anonymous infant, portrayed as a being of unparalleled power – representing all the potential actions that an infant-child-adult is yet to, and could, invoke. One imagines this is the self-reflexive manifestation of the narrator’s own post-Holocaust guilt. But this image, in general terms, has had a profound impact on my already-existent antinatalist sympathies. Without wishing to be melodramatic or hyperbolic, I believe a clear and indisputable point emerges from Time’s Arrow (1991): we cannot know, with any great certainty, what will come materially after conception, nor what a child’s subjective experiences will be like. Procreation simply seems like too much – too much power and potential, and the possibility of pain endured or pain inflicted on others.
Furthermore, one is aware that parents, with the very best of intentions and with all the anticipated alacrity, will nevertheless be thwarted in trying to carve out a child that fits the model – apropos of their own ideological convictions (sub-conscious or otherwise) – of what is ‘good’ and ‘bad’. Moreover, whether or not the parent-child relationship will be the respectful, mutually beneficial enterprise Weinberg (2015) envisions is another, uncertain eventuality – all of which seems far too great a risk when considering how significant it is that we can create life and that this life is essential to maintaining private and collective life. In addition, Weinberg’s musings rest upon a false assumption that I believe is also widespread in society: that people give procreation due thought and consideration, and reflect on their own motivations. Yet in 1998, the year of my own birth, 31% of pregnancies in the United States were categorised as ‘unintended’ (Rachels 2014; Family Planning Perspectives 1998). I was unable to find equivalent statistics for 1998 from the Office for National Statistics in the UK – I would have fallen into this category. But who knows if these statistics accurately reflect the true quantity of ‘unintended’ conceptions? I imagine there are far more than is documented.
I think my point (there is one, I swear!) is clear: we live in a society that has largely made procreation appear inevitable and, consequently, it has become an under-discussed facet of our lives, in which due consideration isn’t countenanced. Moreover, we naively assume that we can then direct our creations to our desired will of the ‘good’ and that our will is ‘good’ – when we can’t know any of this with any certainty. As the above quote from Godwin (2015) evinces, people undergo numerous ‘revolutions’ in their character during their lifetime, and it is this lack of ability to know – to know a child’s fate beyond finitude, to know how they will feel and experience existence, to know whether they will be guided in a morally correct way, to know whether that guidance is correct, to know whether it can and/or will in fact be heeded – in the face of something as enormous as creating life, that further drives me towards antinatalism. Life is simply too much; there are too many variables, too many possibilities and consequences and outcomes. In what one may consider an equally optimistic and pessimistic conclusion, a child can change the world – but one cannot know how they’ll do so, nor whether or not they’ll conclude, during their being-in-the-world, that they wished never to have been born at all. Thus, the assertions of Webb (1907) and Weinberg (2015) simply appear too unrealistic to me; too utopian.
Moving away from the abstruse to the more actual, Stuart Rachels’s (2014) paper ‘The Immorality of Having Children’ expresses a wholeheartedly antinatalist position through an examination of the cost of having a child. All of this is underpinned by the Famine Relief Argument (FRA) and a decidedly more material concern with resource allocation and finances than the above. Prior to delineating the FRA, Rachels proffers three principal reasons why procreation is of enormous importance:
1. ‘One’s procreative decisions have causal implications that ripple across one’s world’ (568). Even one child will have a marked environmental impact and an even bigger social impact. A study by Oregon State University (2009) found that the environmental impact of an extra child in the world is almost twenty times more important than a selection of other environmentally sensitive practices (for example, driving a high-mileage car, recycling, or using energy-efficient appliances and light bulbs). Additionally, the average long-term carbon impact of a child born in America, along with all their descendants, is more than 160 times the impact of a child born in Bangladesh. Interestingly, in another article by Rivka Weinberg (2017), discussing the climate emergency in relation to procreation, she states in a rather blasé fashion that we’ll likely sort it all out eventually and concludes: ‘If you want to have a baby, you’d better fix the world, baby. And, apparently, a lot faster than we thought’. The environmental argument is, perhaps, one of the most compelling, if not the least easily refuted, and is evoked with a Malthusian, acerbic fervour by Doug Stanhope (though I don’t endorse or agree with it, fully).
2. ‘being a parent entails drastically changing one’s lifestyle for at least 18 years. Parenting consumes vast sums of time, money, and energy’ (568).
3. Unlike marriage, ‘parenthood has no morally viable escape hatch. You can divorce your spouse, but you can’t divorce your kids – you can only neglect them’ (568).
Consequently, Rachels (2014) outlines the basis of the FRA: ‘a child costs hundreds of thousands of dollars; that money would be far better spent on famine relief; therefore conceiving and raising children is immoral’ (567). A child will cost, on average, $227,000 (or, by my calculations, at the time of writing this essay, £205,878), money which may be better used immunising, feeding, and clothing the already existent children in need across the world (Rachels 2014: 570-1). It is difficult to refute such stark statistical evidence – consanguineous vanity and genetic usury ought to cede to the greater moral good: reducing the suffering of the already existent. The environmental damage is enormous, and the resources we possess could be used in a far more ameliorative way, in the spirit of negative utilitarianism.
But Rachels (2014) isn’t averse to more distasteful lines of argumentation, such as considering children as ‘luxuries’, as they are ‘expensive’ and are not ‘necessary for the parents health or survival’ (573). Regardless of Kant’s argument that humans should be treated as ends-in-themselves, some families, in some cultures and countries, procreate in order to survive, on farms or in peasant communities where those extra hands are needed for work. And whilst this reproduces the infamous ‘poverty cycle’ coined by Seebohm Rowntree in his Poverty: A Study of Town Life (1901), it ought to be considered an attendant consequence of global capitalism, and not a failing of parents who sit geographically outside the ‘West’ but not outside the global rationality (that is, the combined and unequal development and interconnectedness of the world) of a neoliberal agenda that necessitates their subsistence (Dardot & Laval 2017). It is a grave moral injustice that people in ‘the developing world’, especially mothers, cannot exert control over whether or not they have children due to economic circumstances. Rachels (2014) also states that procreation may be inadvisable due to ‘the possibility of less welcome outcomes. For example, there’s around a 1-in-88 chance that a child today will be autistic’ (579). As one of those ‘less welcome outcomes’, I’m dismayed by such a framing of autism, particularly since such asseverations circumscribe what autism is by enforcing the prototypical image of autism as something unequivocally negative.
Beyond these tamer discussions is the most vocal and well-recognised antinatalist philosopher: David Benatar. His description of experiential asymmetry in his monograph Better Never to Have Been: The Harm of Coming into Existence (2008) is his greatest contribution to the debates surrounding antinatalism. As Joseph Packer (2011) summarises, superbly:
‘the decision to create a new life cannot be evaluated in terms of whether that life would meet some acceptable ratio of good (pleasure) to bad (pain). Instead, the potential life needs to be compared to never coming into existence, which Benatar claims is always preferable to a life with any pain. The reason nonexistence is preferable to existence is because of an asymmetry between how one should evaluate the pain and pleasure of potential persons’ (226).
For Benatar (2008), there is a fundamental difference in how one gauges and understands pain and pleasure, with people inherently biased towards vaunting the latter as their only surety, when we suffer much more than we are willing to acknowledge. This is coupled with an attendant assumption that any human flourishing is pathetic when compared to what can be imagined or envisioned in the grander, macrocosmic scheme of the universe. This unflinching bleakness eventually leads Benatar to conclude that the best humanity can hope for is its own extinction, or speciecide (194-196). This conclusion, and these remarks regarding human flourishing, are utterly antithetical to my own position – I view antinatalism as simultaneously deeply solipsistic, private, and altruistic: it is a private, personal philosophy one cannot and should not enforce upon others, and which also serves the best interests of both oneself and the world around one. Though – reflecting in real-time – this description sounds resonant with some kind of Messianic narcissism, a certain looking-down-one’s-nose… Shouldn’t morality be a communal philosophy, not a private code?
A less extreme voice, Joseph Packer (2011), writes in direct response to this Benatarian antinatalism. With a similar thrust to Webb (1907) and a clear taxonomic affinity with Weinberg (2015), Packer (2011) attempts to provide a two-part test for permissible procreation:
1. ‘ask if the creation of a child would increase the net utility in the world absent consideration of the child’s own happiness… In other words, would the parents be happier with the child and would the child not negatively impact the lives of others’ (230)
2. ‘ask if the child would be likely to lead a life with more happiness than unhappiness’
Packer (2011) concludes that this test ‘ensures any procreation that occurs will always result in lives worth living, because every birth will be mutually advantageous to both the person born and people who already exist’ (233). But this, much like the convictions of Webb (1907) and Weinberg (2015), neglects the inherently labile property of human ‘revolutions’; of the great subjective revolutions they will endure; of the war between autonomy and paternalism and the often inevitable victory and subsequent negotiations of the former; and of the negative experiential consequences D/EAF entails. The creation of life is a gargantuan euphony of possibilities marred by the cacophonic, chiral mirror of its miseries, with the quiet thrust of the latter, its possibility, proving too much of a risk – particularly when one considers that being a parent can be achieved by other means that ought not to be considered an alternative to the norm: adoption.
Interesting questions arise about how the adoption of a child can affect antinatalist philosophy – with work by Tina Rulli (2014), Joel Feinberg (1973), Oreskovic & Maskew (2008), Alain Badiou (2012), and Van IJzendoorn & Juffer (2006) all having profound philosophical implications for this subject – implications I may try to broach at a later date.
In considering all the above, it is clear that antinatalism is a philosophy coloured by many differing shades of opinion, rooted in experiential, environmental, utilitarian, financial, and opportunity concerns, inter alia. Irrespective of these different angles of enquiry, all of these positions display a profound fixation on the moral question of future and potential suffering in relation to our position as creators and the created – something, regardless of our own respective conclusions, we should all give much greater consideration during our lives.
I don’t have a personal, definitive answer to whether it is better to have n/ever been born. Smith’s (2011) own reflection on his life and procreation, particularly his comment about ultimately enjoying life in spite of himself, is one I think about often. I am left with the same conclusion as Weinberg (2015): ‘There is no solution’ (138).
Existence makes no promises beyond indefinite finitude and, importantly, it does not require human players. Negative utilitarianism, that is, the belief that minimising potential suffering is of greater moral importance than securing maximum pleasure, is my guiding light. In the course of one’s life, it is paramount to do one’s utmost to ameliorate the suffering of those already here on Earth; to let altruism be the measure of our motions; and to refuse procreation – that creator of indefinite and unknowable suffering – so we can expend our energies on preserving the living present. My own existence is irreversible – I am here. I am here to stay. But how is one to know whether any future lives created will share that conviction? Procreation, to me, is a risk too great for an existence so labile. Moreover, despite my political optimism, the climate emergency marches on and there is no certainty that the world is going to become any better than the current state it is in. The far-right hold the reins. With that in mind, it seems irrefutable (it must be) that my efforts can be better expended helping those who are already here, already bearing the burden of existence, rather than creating indefinite future suffering.
‘Working-class stories are not always tales of the underprivileged and dispossessed’. In this anthology, Kit de Waal joins thirty-three writers in ruminating on what ‘working-class’ means in contemporary Britain.
For Barker, working-class writers ‘suddenly became old hat’ after the brief but enormous popularity of kitchen sink realist writers Alan Sillitoe and John Braine. Of course, this assertion comes with a circumscribed assumption about what working-class writing is: gritty, industrial, post-war narratives from gregarious outer-London (throw in some Bovril, supped from a tin pail). Barker assumes that since those Angry Young days, working-class writing is no longer popular or fashionable, despite the publication of ‘classic’ contemporary working-class writers such as Janice Galloway and her The Trick is to Keep Breathing (1989): a solipsistic, claustrophobic narrative across time in which that ostensible fulcrum of working-class existence, the ‘kitchen sink’, becomes a protective but suffocating prison for a bereaved woman in Thatcher-era Scotland. Moreover, the supposedly long-dead halcyon days of working-class fiction didn’t prevent James Kelman from winning the Booker Prize for his How Late it Was, How Late (1994). Working-class writing has never stopped being needed, wanted, and radical.
Nevertheless, for Barker, the recent decision by the London publishing bubble to try to provide platforms for working-class and ‘regional’ voices could only be a ‘fashionable’ attempt at post-Referendum appeasement. One may argue, however, that it is far more likely that publishers are simply responding to the demands of the market: people want to read about lives which resonate with them. And, with almost one thousand backers, the successful, crowdfunded publication of Common People via Unbound illustrates that working-class writing is in demand, ‘fashionable’, and popular. It also shows that if the status quo doesn’t provide platforms for working-class voices to be heard, they’ll go elsewhere and they’ll be heard anyway. It is unsurprising, then, that publishers are initiating, albeit (too) slowly, a long overdue gear-shift via their diversity schemes.
“People like me can write books. People like anyone can write books. You have to learn to like the sound of your own voice; you have to trust that your perspective is interesting because it is yours, that you are seeing the world through your own eyes and can use your own words to describe it.”
Cathy Rentzenbrink, ‘Darts’, in Common People, p.81.
Perhaps the principal appeal of Common People lies in its polyvocality. One may criticise the anthology for containing stories of such brevity – with so many contributors, the reflections embedded within are often less than ten pages in length. Yet this quality pointedly asserts one of the anthology’s greatest strengths – its destabilisation of the prevailing, monocultural treatment of the working class. In placing an array of eclectic reflections in close succession, de Waal delivers a collection motivated by agonistic pluralism – that is, a conflicting, contrary, but nevertheless unified whole: in Common People, divergent annals of the affectual and material experiences of being a member of the working class abound.
Lisa McInerney’s delightfully acerbic ‘Working Class: An Escape Manual’, Katy Massey’s ‘Don’t Mention Class!’, and Chris McCrudden’s ‘Shy Bairns Get Nowt’ all proffer perceptive accounts of the labile and contested nature of what ‘working-class’ means, and the form and function of the status quo in keeping its foot firmly on the windpipe of workers’ self-identification. Contradistinctively, Stuart Maconie’s ‘Little Boxes’ offers a seductive paean to the streets which ‘made me’ (p.54). ‘Keats Avenue, Eliot Drive, Blake Close, Milton Grove… We lived among poets. We fought, drank, and snogged amongst literary giants’ (p.41). For Maconie, whilst conscious of the unjust privilege of the elites, growing up on a Wigan council estate ‘didn’t do me any harm’ (p.54) – rebuffing predominant narratives of a class wholly self-pitying and deprived; dissatisfied with their origins. Conversely, at the anthology’s opening, Tony Walsh recapitulates the ever-pertinent ‘us vs. them’ in his rousing call-to-arms ‘Tough’:
They don’t like it when our stories rise above the kitchen sink
They don’t like it when we learn, remember, organise or think
They don’t like it when we’ve knowledge so they price us out of college
But it’s tough, we’ve had enough and we are coming
Tony Walsh, ‘Tough’, in Common People, ll.7-11 (p.1)
Further adding to a sense of contrast and nuance, Adelle Stripe and Anita Sethi’s vignettes move away from the working class of England’s metropolises and depict experiences of rural life. Stripe’s rumination on the bleakness of the moors and her father’s place within them stands in strong contrast to the liberating property of the Lake District for Sethi, otherwise imprisoned by the urban landscape of Manchester – revealing England’s pastures green as markers of both revelry and resignation.
Alongside these tales are too many superb entries to adequately do justice to here (buy the book!!): Paul Allen’s formative years on a libidinous building site; Cathy Rentzenbrink’s place as the dart-loving literati of Snaith; how Jill Dawson’s experience of Ted Hughes led her to become a writer-in-residence at comprehensive schools; Riley Rockford’s disorientating impostor syndrome during a formal dinner at a famous graduate school; and, by far my favourite contribution, Adam Sharp’s ‘Play’. Sharp’s musing – in a mere nine pages – proffers a compelling and affecting account of filial frustration towards a father who refuses to be exactly that. Sharp doesn’t make a pointed or self-conscious attempt to prove his working-class credentials. Rather, with indomitable sensitivity, he delivers a tale of searing honesty that ends with a satisfying – merited – fuck you. Similarly, Alex Wheatle’s letter to his younger self, ‘Dear Nobody’, personifies the various competing pressures within a tortured youth’s psyche, with a vulnerability that can’t fail to endear to all of us who look back in anger at how the working class are indoctrinated into making an enemy of their futures; of failing to see their worth. And, with the poesy and acuity, but without the ornate and indulgent solipsism, of Elizabeth Smart, Julie Noble’s ‘Detail’ adumbrates the tentative ease with which love enraptures, does, and undoes, proving to be the perfect, intimate swansong for a collection crowded by the manifold pains and pleasures of common people.
“My advice is to always, to any member of the working class, get smart, read as much as you can, and find out who’s using you. I did. What’s wrong with you?”
In a rare moment of lucidity and non-cringy candour, Lydon offers timeless and invaluable advice – advice which all of the contributors to Common People have followed. The anthology offers a variety of perspectives from an educated, often autodidactic class; a class as complex in its inner conflicts as it is in its outward resistance – thoroughly undercutting the assumed circumscriptions made by the likes of Pat Barker. I cannot help but feel a tentative excitement and curiosity about the stories that my own generation will pen – in which council houses have not proven to be such pervasive anchors for the proletariat experience – and how this will further complicate portraits of the working class. And, in the digital epoch and gig economy, perhaps they will illustrate a class far more alienated, far more angry, and far more radical.
The collection closes with a sobering essay by Dave O’Brien, in which he unpicks publishing’s ‘serious class problem, as…one of the most socially exclusive of creative industries’ (p.278). The statistics are as incendiary as they are insightful: the ONS Labour Force Survey found that almost half (47%) of all authors, writers, and translators in Britain began from the most privileged rungs of the social ladder (p.278), whereas just 12% are from the working class (p.279). However, despite relative gender balance, the study omits consideration of who retains power within publishing: it is evident that women are often excluded from the prestigious roles within the creative professions – such as commissioning. Publishing, too, proves to be pointedly more white than other industries and, as O’Brien reminds us, these statistics don’t offer a holistic picture: who is being excluded from these calculations?
Understandably, then, Kit de Waal’s edited anthology, delivered by the crowd-funded people power of Unbound, proves a much-needed breath of fresh air. It would be false and jejune to declare that the days of Penguin et al. are numbered and that they may soon find themselves overwhelmed by the power of outsider publishing – where there are platforms far more willing to provide opportunities for those voices that people most need – and want – to hear. Yet edifying publications such as Common People are certainly a thorn in their side, and deliver a clear indication of why there needs to be change – a largely untapped, fecund pool of talent and experience lives within and amongst the working class. Regardless of the future direction of publishing – and that of working-class literature – Common People stands tall as a hopeful beacon: that we will stand up, we will be counted, and we will be heard.
“I had to develop my own ways of dealing with being different. By the time I had got to university I’d come up with a strategy, and the strategy was really simple: don’t interact with people of your own age, just turn up, get straight As, and I wouldn’t speak to anyone.”
Chris Packham in Asperger’s & Me
In his documentary Asperger’s & Me, Chris Packham, naturalist and broadcaster, discussed the difficulty of living with Asperger’s without a diagnosis, exploring the nature of the condition and the effect of his diagnosis later in life during the early 2000s. Conversely, and I think more fortunately, my (second) (re-)diagnosis came when I was sixteen.
Originally, my appointment at the Newland House NHS Trust in Northampton had been scheduled, after meetings with my GP, to discuss severe depression and further treatment options. I was utterly miserable and unproductive, with my gauche isolation constructing an ever-growing chasm between myself and the world around me. I’d always been odd and obstreperous. Looking back over my medical records, it’s resonant to see that even at six and seven years old I kept myself to myself and didn’t have many friends at school. It was hard for me to identify and put into words why I felt the way I did: about myself, the world around me, my being-in-the-world and that sense of being a step apart. During my appointment, I described my state. After several hours of consultation and looking over previous medical notes (particularly my initial diagnosis of ADHD-ASD in 2005 and subsequent ‘undiagnosis’ of ADHD as a small boy) the doctor concluded I was autistic; that I had Asperger’s. This was, to her mind, the major source of my depression – an inability to negotiate the world around me.
This came as a shock to me. My mum had assumed that, due to the ADHD diagnosis being thrown out, the ASD diagnosis had been thrown out too. Blind to all else but the immediacy of this diagnosis (or blunt re-diagnosis), I watched the doctor print off some reading material for me about accessing council support services for autistics. She then informed me there was no ‘cure’ for my condition, and I left Newland House with my quietly bemused mum. How was I to know this diagnosis was right? The ADHD label had been switched out before, with much difficulty and testing. As if to cement this clarification, a few days later, I received a dense letter delineating the doctor’s findings and conclusion: Christopher is autistic. Understanding this ‘condition’ and gaining access to support services will ameliorate if not etiolate the depression.
I didn’t feel liberated in the years immediately following my diagnosis, and it’s been a comfort to find that I am not the only person who has felt a pronounced sense of shame alongside increased unease about why I feel isolated and unreachable. What is autism? What does being autistic mean? Wikipedia was distressing, declaring that the ‘prognosis’ for autistics was ‘frequently poor’. My already questionable self-esteem plummeted, compounded by gaining a place at a university despite not having the grades they said I would need. However, learning more about autism has made me feel assured in the diagnosis and more at ease with who I am – and, for the past two years, I’ve come to grow more familiar with myself, my capacities and limitations, and have been able to develop methods of coping with depression.
What is ASD?
Leaping up in public to do something wildly expressive and then quickly retreating back into my shell seemed, well, sort of normal to me. Maybe normal is the wrong word. A study in the British Journal of Psychiatry in 1994 by Felix Post claimed that 69% of the creative individuals he’d studied had mental disorders… I have come to believe that you can escape your demons and still tap the well.
David Byrne, discussing his Asperger’s in How Music Works, pp.38-9.
Whilst I disagree with Byrne about whether autism (ASD) is a mental disorder, or a source of ‘demons’, he remains a powerful influence on my life as an autistic artist. His lyricism, elucidating that ‘Martian’ or ‘alien’ perspective of human relationships and being-in-the-world resonates deeply with me. But what exactly is autism?
Around 700,000 people have Asperger’s Syndrome in the United Kingdom – roughly 1 in 100 – most of whom are male, though women have it too. Research pushes on, but the exact etiology of autism remains unknown. No two autistic people are exactly the same – further separating the condition from more overt, uniform physical disabilities. The way I see, hear, process, and feel the world around me differs markedly from that of non-autistic or ‘neurotypical’ people in terms of hypo- and hyper-sensitivity. But autists don’t, as the National Autistic Society observes, ‘look’ disabled or immediately different in all cases. It all lies largely in our conduct – hence some (usually not self-advocates) refer to autism as a ‘hidden disability’ (it’s not hidden for us).
Whilst certain difficulties are shared amongst those with autism, the way in which these difficulties affect each person, and to what extent, differs. The major difficulties often encompass the social fulcrum upon which life pivots, that is, the ability to understand other people’s intentions, purposes, and what they mean communicatively, and to respond to them ‘correctly’. Partaking in the diurnal machinations of family, school, and work life can be – and is for me – exhausting. Chris Packham, David Byrne, Jack Monroe, and Gary Numan are some of the popular figures that have implicitly given me support through their discussions of living as ‘high-functioning’ autists, and they continue to give me hope as a person with what feel like quasi-Faustian aspirations (socially and professionally).
In my case, I have a high tactile sensitivity which has often caused significant difficulties – I hate and lash out against certain physical pressures, typically when being touched; hand-dryers in bathrooms drive me to tears (though now I just bite my tongue, hard, which helps – or avoid them entirely and try to find a bathroom with paper towels); I’m easily transfixed into solipsism by certain lights, colours, and sounds. Fireworks are an interesting one: I adore the lights but it feels like there is a black-hole-cringe inside me when they explode. This has become easier to handle compared to when I was younger and I can attend displays now (with some hard blinking/flinching). Leaving my university library one winter midnight, I saw the low-humming, yellow pallor of a street lamp and that was it – I parked my bike, found a bench, and spent an inordinate amount of time just staring at it. The hum and the low lighting coalesced in an unprecedented way. I frequently stim – that is, engage in repetitive self-stimulating behaviours. This can be auditory, visual, and spatial. Orally/aurally, I have certain sounds, noises, and expressions that bring me comfort when repeated. Spatially, I will often flap, moving my arms and legs in repetitive motions, walking around in repetitive, looping circles. I also like to watch the same film and video repeatedly, or listen to the same music track, repeatedly, often for weeks and months. Much of my time is thus spent with headphones on. I recall one of my undergraduate lecturers expressing concern about how often I wore earphones – immediately putting them on during seminar ‘breaks’ and after a seminar finished, and always entering seminars with them on until the last possible moment.
I try to avoid spontaneous and non-specific conversation and often plan all my conversations, anticipating the subjects most likely to crop up and what responses I can give that will satisfy the requirements of that conversation. I’m pretty much always anticipating the social and professional future – what I need to be tackling next and what may be coming. This vigilance is exhausting and is coupled with time-blindness. By time-blindness, I mean that this perfectionism, socially and professionally, takes up an inordinate amount of time and I’m often ignorant of how much time it is taking and how to effectively prioritise, because I have to get it right. So I struggle with meeting my deadlines. If I do meet my deadlines, I’m utterly fatigued (but the work is often of a high standard because of the perfectionism). This time-blindness blends in, I think, with being unable to read when my body needs things: I sometimes forget to eat, and very often forget to drink – meaning I wake up in the early hours and consume vast quantities of water.
Regarding communication, I do remember during my first year of university, I fell through the door to my Academic Community meeting, late, stating ‘Sorry I’m late – I was eating an apple and got lost in the moment’. I was perfectly serious and was met with laughter, which, in hindsight, is understandable… I realise, now, that the sincerity and absurdity was unintentionally comedic. But I was trying to offer a serious explanation: that apple was gripping. I’ve often been told by people that they can’t tell if I’m serious or being sarcastic. The management of tone, expression, and intention is particularly difficult. I find it difficult to understand what people say and what people mean, via tone of voice and knowing when they are joking (the parasocial); I avoid supermarkets and people-packed places due to the unpredictability and anticipated infringement of my sense of control; I struggle to physically stay still and often have to move about to stay comfortable when sedentary; I have repetitive behavioural patterns and cling to my routines; and I often use long, unusual, or complicated lexis in daily conversation – because that just feels natural. I can be silent for a long time, when I’m gestating and processing everything around me, and will then burst into loquacious animation. Every day I wake up with pillows flung from the bed and the duvet on the floor, because I move excessively in my sleep – I have often awoken with light bruises, aches, and pains. Poor sleep and an overactive mind are frequent occurrences for those on the spectrum.
Being autistic has led to me receiving a lot of labels: arrogant, distant, difficult, unemotional, quirky, unusual, inter alia. It has also resulted in a lot of bullying.
So, at university, I intensified my use of that strategy that, unbeknownst to me, Packham had also pursued decades earlier whilst he was at university: stay away from your contemporaries, work hard, and leave. I often feel saddened about the fact that I’ve never really experienced the ‘social’ aspect of university. I like my own company. I like me. But making that overlap with university and people has been hard. I think for a lot of autistic people, that need to feel liked and validated and to interact with people is there – it’s just hard to find a way in that doesn’t feel too compromising and taxing.
My time at university
Now that I have just finished my undergraduate degree, the epigraph of this essay has a profound, validating property. I’ve spent most of the past three years sequestered away in my bedroom or, in the last year, studying in the library late at night (when it is finally quiet). The opportunities to go out and to be included did present themselves in the beginning, but I rejected them – it felt like the only way I could exist without feeling uncomfortable, or like I was going to be scrutinised, or be considered unusual. As long as I was alone, I had a greater chance of being content and in control. Being around my peers made me feel utterly alienated by their ease, their speed of recognition, their ability to conform, fit in, tune in, and turn off. It looked (and felt) so draining.
I didn’t want to party, or go to clubs and pubs; I didn’t want to drink; I didn’t want to make small talk; I didn’t want to be part of an extracurricular society; and I couldn’t understand why other people cared or would want to either, or why or how I should care in turn; all I wanted to do was work, namely talking and writing about music, literature, politics, morality, and ideas. The university was a workplace. I was there to work. I’m sure my background probably impacted this perception. I didn’t want people to think that there was nothing else beyond my work-orientation, however true that may be, and subsequently peg me as some one-track mind freak. In short, I wasn’t gregarious. Nine times out of ten, if I saw or spoke to anybody it was when I saw the very few people who I knew from a lecture theatre, in a lecture theatre. I spoke more often to my academics, since our group seminars were often me on full blast and office hours were a perfect social scenario (they have a clear purpose, I can talk about work as much as I want, they have a clear time schedule).
That isn’t to say that university hasn’t been an alienating and isolating experience, nor that I am some unemotional machine. I feel things incredibly deeply, with high levels of attachment and resonance – when a poem, play, painting, novel, film, or a song breaks through, I latch on to it with every sinew of my being. I just haven’t felt that in the company of other people. Rather, being around other people often actualises alterity, breaks the hermetic seal I have built, and often creates an enormous amount of stress: navigating conversations, knowing what is and isn’t permissible and when to/not to say it, parsing body language. My instinct is to be blunt, direct, and singular – but that can appear standoffish. So manufacturing a social face is a draining requirement. Practising eye contact duration and gestures in the mirror at home has helped, I think. People can bring me a lot of joy and happiness, but they tend to be the ones I’ve known all my life or for a very long time – people whom I feel comfortable with and who I know won’t crucify me for some social peccadillo or major mistake. It can take an awfully long time to unwind around other people.
Regarding all I’ve said: autism is not a millstone, and my ‘condition’ does not simply or solely pose obstacles. My autism is the source of all the qualities that make me good at what I do: namely, reading and writing critically about literature. As it turns out, people with Asperger’s tend to be rather bright! My mind is frenetic, constantly making connections in and across the books I am reading and have read, enabling me to identify patterns and weave complex arguments often sui generis within my cohort.
I don’t mean to reinforce stereotypes but, in the past, I’ve been able to sit down and read and write constantly with a level of unrivalled concentration, and feel a sense of joy and motivation and purposiveness I find indescribable. I have heaps and heaps of writing, journals, notebooks, digital memos, and chicken scratchings – I write just for the sake of it, to get it out of me. ‘Academic’ writing, however, can be like getting blood from a stone. But, when I reach that same state of hyperfocus (often when editing/redrafting), I feel absorbed, like I’ve entered another layer in the environment.
I haven’t felt able to talk to anyone about being autistic (because I don’t know any autistic people), apart from one allistic friend – off-hand, in a throwaway kind of way – and it wasn’t followed up or questioned. So I didn’t pursue the subject further. I just don’t think most people, young or old, are equipped to discuss autism. I never told my academics either, largely because I was afraid of facing prejudice or misunderstanding. People would start putting me in boxes they’ve built about something they don’t know about and/or they would begin acting like they are walking on eggshells. When it came to my writing, I was regarded as bright and committed – unusually so, producing ‘unusual’ work, as I was repeatedly told – but they weren’t antsy about me and my work, not like I thought they would be if ‘autistic’ and ‘Asperger’s’ st(r)uck upon their lexicon. Forrest Dunbar, another student, had a much different experience studying English. But I never said anything. I think fear either overwhelms faith, or becomes faith. I was doing well academically though struggling, immensely, socially. In thinking about broaching my struggle with staff, I had seen ‘the moment of my greatness flicker… and in short, I was afraid’.
It’s summer now. The final year is over. My academic work and interests remain what primarily gives me security and value in a world where my own value is measured in a completely different, social currency – where my social value cannot be gauged and accessed easily and I find it hard to reciprocate and engage those who try. I don’t feel ashamed of my brain. I wouldn’t ever want to ‘give up’ being autistic. Though it means, socially, I am an island, it is also part of how I won my MPhil scholarship and will begin postgraduate studies at one of the most prestigious universities in the world later this year – achieved because of and not in spite of my autism. It is part of what makes me unique and a good worker. I wouldn’t change how I spent my time as an undergraduate. Whilst I spent most of it alone for my own security and well-being, I also filled it with triumphs and achievements. I got to study in Canada, something I have wanted to do since I was fourteen; I won several academic awards and grants; I became the ‘best’ UG student in my university department; I helped a lot of people through the volunteering I did outside of university, teaching in a primary school and working in a food bank; and I raised a considerable amount of money for charity with my half-marathon, inter alia. Being autistic means I am different and face some exhausting and frustrating obstacles, but I am also capable.
So though I ‘um’ and ‘ah’ at the Janus of great change, I close here with Max Ehrmann in my mind:
Be gentle with yourself. You are a child of the universe no less than the trees and the stars; you have a right to be here… And whatever your labors and aspirations, in the noisy confusion of life, keep peace in your soul… Be cheerful. Strive to be happy.
In penning an apologia, it would appear that I’m on the backfoot before this blog has even begun. And, yet, it feels inexplicably necessary. Whilst lacking the theological tensions, intentions, and pretensions of John Henry Newman, it feels important – in a veritable ocean of bloggers more articulate than I – to adumbrate what this blog is for; what it intends to achieve; why; and how.
This blog, in part, is my attempt to ensure I avoid the anhedonic lassitude and objectlessness I think will come to the fore now I’ve finished my English BA. I suppose it’s analogous to long-distance running: if, after traversing a great distance (or so one hopes), you suddenly stop working at the rate you have been, you can severely damage your health – in more ways than one. And, so far, every time I finish an academic year, or am between semesters, I have inevitably crashed, fallen ill, and have had to build myself back up again due to suddenly stopping for a breather. In making this my project for the foreseeable future, then, I hope to move myself into a calmer though still industrious state of mind!
“All men who have turned out worth anything have had the chief hand in their own education” – Sir Walter Scott
But what is the purview of this ostentatiously dubbed ‘project’? Well, now I have a marked degree more autonomy, I can throw myself into reading and re-reading books that have drawn my eye before, during, and after my formal studies, but that have sadly had to take a backseat in the face of more pressing demands. Scott’s quote articulates my sentiment succinctly and, as an adept truant yet continuously bookish, back-bedroom casualty, I feel like reading and writing about all that ensnares me should prove a pursuit easily motivated.
There are reams of Modern and Contemporary prose, poetry, plays, and philosophy (all the p words!) I want to provide critical exegeses of – here – in order to keep myself challenged and occupied during this interstice between UG and PG study. Having this space will enable me to run the gamut of them, hopefully providing a sui generis analysis of the political and contemporary valences of an array of (largely) twentieth and twenty-first century texts, music, and art. Who knows? Moreover, I hope to share opinion pieces reflective of my experiences. Regardless of what is to come, I don’t intend to broach this with too rigorous a schedule; nor to declare any intentions beyond writing, something, and challenging myself to exceed these four walls.
In a word, welcome; welcome to my self-indulgent, self-care project/warehouse/blog! Hopefully you’ll find something of interest here in the not too distant future. Or not.