Is there a doctrine or manifesto of cooperative distrust? Because I think that’s what we need today, in the face of reams of government data — almost all of it, in fact — that is untrustworthy, and the only way it can support our democracy is if the public response to it (if and when it becomes available in the public domain) is led by cooperative distrust: one and all distrusting it, investigating the specific way in which it has been distorted, undoing that distortion and, finally, reassessing the data.
The distrust here needs to be cooperative not to undermine the data (and thus avoid spiralling off into conspiracies) but to counteract the effects of ‘bad data’ on ethical public governance. There are some things that we the public trust our government to not undercut – but our present one has consistently undercut government while empowering the party whose members occupy it.
In the latest, and quite egregious, example, the Indian government has said an empowered committee it set up during the country’s devastating second COVID-19 outbreak to manage the supply of medical oxygen does not exist. Either the government really didn’t create the committee and lied during the second wave or it created the committee but is desperately trying to hide its proceedings now by lying. Either way, this is a new low. But more pertinently, the government is behaving this way because it seems to be intent on managing each event to the party’s utmost favour – pointing to a committee when having one is favourable, pretending it didn’t exist when it is unfavourable – without paying attention to the implications for the public memory of government action.
Specifically, the government’s views at different points of time don’t – can’t – fit on one self-consistent timeline because its reality in, say, April 2021 differs from its reality in August 2021. But to consummate its history-rewrite, it has some commentators’ help; given enough time, OpIndia and its ilk are sure to manufacture explanations for why there never was a medical oxygen committee. On the other hand, what do the people remember? Irrespective of public memory, public attention is more restricted and increasingly more short-lived, and it has always boded poorly that both sections of the press and the national government have been comfortable with taking advantage of this ‘feature’, for profits, electoral gains, etc.
Just as there is a difference between what the world really looks like and what humans see (with their eyes and brains), there is a significant difference between history and memory. Today, remembering that there was a medical oxygen committee depends simply on recent memory; one more year and remembering the same thing will also demand the inclination to distrust the government’s official line and reach for the history books (so to speak).
But the same government has also been eroding this inclination – with carrots as well as sticks – and it will continue, resulting ultimately in the asymptotic, but fallacious and anti-democratic, convergence of history and memory. Cooperative distrust can be a useful intervention here, especially as a matter of habit, to continuously reconcile history and memory (at least to the extent to which they concern facts) into a self-consistent whole at every moment, instead of whenever an overt conflict of facts arises.
Considering how much the Government of India has missed anticipating – the rise of a second wave of COVID-19 infections, the crippling medical oxygen shortage, the circulation of new variants of concern – I have been wondering why we assemble giant institutions like governments: among other things, they are meant to weather uncertainty as best as our resources and constitutional moralities will allow. Does this mean the bigger the institution, the farther into the future it will be able to see? (I'm assuming here a heuristic that we are normally able to see, say, a day into the future with 51% certainty – slightly better than chance – for each event in this period.)
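A toy sketch of that heuristic (the independence assumption and the function here are mine, not the essay's): if each day's forecast is independently right with probability 0.51 – barely better than a coin flip – the chance that an entire n-day chain of forecasts is right decays geometrically, which is why seeing even a month ahead is effectively hopeless.

```python
def chain_confidence(p_daily: float, days: int) -> float:
    """Probability that every forecast in an n-day chain of daily
    forecasts is correct, assuming each day is independent."""
    return p_daily ** days

# At 0.51 per day, a week-ahead chain is already below 1%,
# and a month-ahead chain is vanishingly unlikely.
for days in (1, 7, 30):
    print(f"{days:2d} days ahead: {chain_confidence(0.51, days):.10f}")
```

The point of the sketch is only that a marginal daily edge over chance compounds into near-total blindness at any useful horizon – which is the intuition behind wanting institutions that can do better.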
Imagine behemoth structures like the revamped Central Vista in New Delhi and other stonier buildings in other cities and towns, the tentacles of state control dictating terms in every conceivable niche of daily life, and a prodigious bureaucracy manifested as tens of thousands of civil servants most of whom do nothing more than play musical chairs with The Paperwork.
Can such a super-institution see farther into the future? It should be able to, I’d expect, considering the future – in one telling – is mostly history filtered through our knowledge, imagination, priorities and memories in the present. A larger government should be able to achieve this feat by amassing the talents of more people in its employ, labouring in more and more fields of study and experiment, effectively shining millions of tiny torchlights into the great dark of what’s to come.
Imagine one day that the Super Government’s structures grow so big, so vast that all the ministers determine to float it off into space, to give it as much room as it needs to expand, so that it may perform its mysterious duties better – something like the City of a Thousand Planets.
The people of Earth watch as the extraterrestrial body grows bigger and bigger, heavier and heavier. It attracts the attention of aliens, who are bemused and write in their notebooks: “One could, in principle, imagine ‘creatures’ that are far larger. If we draw on Landauer’s principle describing the minimum energy for computation, and if we assume that the energy resources of an ultra-massive, ultra-slothful, multi-cellular organism are devoted only to slowly reproducing its cells, we find that problems of mechanical support outstrip heat transport as the ultimate limiting factor to growth. At these scales, though, it becomes unclear what such a creature would do, or how it might have evolved.”
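The aliens' note leans on Landauer's principle, which sets the minimum energy dissipated by erasing one bit of information at k_B·T·ln 2. For the curious, a minimal sketch of the number involved (the function name is my own):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact in the 2019 SI)

def landauer_limit(temp_kelvin: float) -> float:
    """Minimum energy, in joules, dissipated by erasing one bit of
    information at the given temperature, per Landauer's principle."""
    return K_B * temp_kelvin * math.log(2)

# At room temperature (~300 K) this works out to roughly 2.9e-21 J per bit –
# the thermodynamic floor the aliens' back-of-the-notebook estimate rests on.
print(landauer_limit(300.0))
```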
One day, after many years of attaching thousands of additional rooms, corridors, cabinets and canteens to its corpus, the government emits a gigantic creaking sound, and collapses into a black hole. On the outside, black holes are dull: they just pull things towards them. That the pulled things undergo mind-boggling distortions and eventual disintegration is a triviality. The fun part is what happens on the inside – where spacetime, instead of being an infinite fabric, is curved in on itself. Here, time moves sideways, perpendicular to the direction in which it flows on the outside, in a state of “perpetual freefall”. The torch-wielding scientists, managers, IAS officers, teachers, thinkers are all trapped on the inner surface of a relentless sphere, running round and round, shining their lights to look not into the actual future but to find their way within the government itself.
None of them can turn around to see who it is that’s chasing them, or whom they’re chasing. The future is lost to them. Their knowledge of history is only marginally better: they have books to tell them what happened, according to a few historians at one point of time; they can’t know what the future can teach us about history. And what they already know they constantly mix and remix until, someday, like the progeny of generations of incest, what emerges is a disgusting object of fascination.
The government project is complete: it is so big that it can no longer see past itself.
Just the other day, I’d mentioned to a friend that Steven Pinker was one of those rare people whose ideas couldn’t be appreciated by proxy, such as through the opinions of other authority figures, but had to be processed individually. This is because Pinker has found as much support as he has detraction – from Jerry Coyne’s Why Evolution is True on the one hand to P.Z. Myers’s Pharyngula on the other. As an aspiring rationalist, it’s hard for me to place Pinker on the genius-lunatic circle because it’s hard to see how his own ideas are self-consistent, or how all of his ideas sit on a common plane of reason.
A 2013 article Pinker wrote in The New Republic only added to this dilemma. The article argued that science was not an enemy of the humanities, with Pinker trying to denounce whatever he thought others thought “scientism” stood for. He argued that ‘scientism’ was not the idea that “everything is about science”, rather a commitment to two ideals: intelligibility and that “the acquisition of knowledge is hard”. This is a reasonable elucidation necessary to redefine the role and place of science in today’s jingoistic societies.
However, Pinker manages to mangle the rest of the article with what I hope was pure carelessness – though that's hard to believe, given the fixation at the back of our minds that Pinker is a smart man. He defines everything in this world he thinks worth defining from the POV of natural science alone. Consider these lines:
Though the scientific facts do not by themselves dictate values, they certainly hem in the possibilities. By stripping ecclesiastical authority of its credibility on factual matters, they cast doubt on its claims to certitude in matters of morality. The scientific refutation of the theory of vengeful gods and occult forces undermines practices such as human sacrifice, witch hunts, faith healing, trial by ordeal, and the persecution of heretics.
Pinker has completely left out subjects like sociology and anthropology in his definition of the world and the values its people harbour. Though he acknowledges that “scientific facts don’t by themselves dictate values”, he’s also pompous enough to claim scientific reasoning alone has undermined human sacrifice, witch hunts, etc. Then why is it that senior ISRO officials, who are well-educated rocket scientists, offer rocket models at temples before upcoming launches? Why is it that IT employees who migrate from Chennai and Bangalore to California still believe that the caste system is an idea worth respecting?
The facts of science, by exposing the absence of purpose in the laws governing the universe, force us to take responsibility for the welfare of ourselves, our species, and our planet.
This seems to make logical sense… until you pause and wonder if that’s how people actually think. Did we decide to take control of our own welfare because “the laws governing the universe lack purpose”? Of course not. I’m actually tempted to argue that the laws governing the universe have been stripped of the ability to govern anthropic matters because we decided to take control of our welfare.
In fact, Pinker ascribes to the humanities and social sciences intentions that most institutions that study them likely don't have. He also appropriates the ideas of pre-18th-century thinkers into the fold of science when it would've been wrong to do so: Hume, Leibniz and Kant (to pick only those philosophers whose work I'm familiar with) were not scientists. Somehow, the one person who would've been useful to appropriate for the purposes of Pinker's argument was left out: Roger Bacon. Then, deeper into the piece, there's this:
The humanities have yet to recover from the disaster of postmodernism, with its defiant obscurantism, dogmatic relativism, and suffocating political correctness. And they have failed to define a progressive agenda. Several university presidents and provosts have lamented to me that when a scientist comes into their office, it’s to announce some exciting new research opportunity and demand the resources to pursue it. When a humanities scholar drops by, it’s to plead for respect for the way things have always been done.
With sweeping statements like these, Pinker leaves his head vulnerable to being bitten off (like here). At the same time, his conception of “scientism” burns bright like a gemstone lying in the gutter. Why can't you be more clear-cut like the gem, Pinker, and make it easier for all of us to get the hang of you? Can I trust in your definition of ‘scientism’ or should I wonder how you came upon it given the other silly things you believe? (Consider this: “The definitional vacuum [of what ‘scientism’ means] allows me to replicate gay activists’ flaunting of ‘queer’ and appropriate the pejorative for a position I am prepared to defend.” When was ‘queer’ ever a pejorative among gender/sexuality rights activists?) Oh, why are you making me think!
As I languished in the midst of this quandary and contemplated doing some actual work to get to the bottom of the Pinker puzzle, I came upon a review of his book Enlightenment Now (2018) authored by George Monbiot, whom I’ve always wholeheartedly agreed with. Here we go, I thought, and I wasn’t disappointed: Monbiot takes a clear position. In a bristling piece for The Guardian, Monbiot accuses Pinker of cherry-picking data and, in a few instances, misrepresenting facts to reach conclusions more favourable to his worldview, as a result coming off as an inadvertent apologist for capitalism. Excerpt:
Pinker suggests that the environmental impact of nations follows the same trajectory, claiming that the “environmental Kuznets Curve” shows they become cleaner as they get richer. To support this point, he compares Nordic countries with Afghanistan and Bangladesh. It is true that they do better on indicators such as air and water quality, as long as you disregard their impacts overseas. But when you look at the whole picture, including carbon emissions, you discover the opposite. The ecological footprints of Afghanistan and Bangladesh (namely the area required to provide the resources they use) are, respectively, 0.9 and 0.7 hectares per person. Norway’s is 5.8, Sweden’s is 6.5 and Finland, that paragon of environmental virtue, comes in at 6.7.
Pinker seems unaware of the controversies surrounding the Kuznets Curve, and the large body of data that appears to undermine it. The same applies to the other grand claims with which he sweeps through this subject. He relies on highly tendentious interlocutors to interpret this alien field for him. If you are going to use people like US ecomodernist Stewart Brand and the former head of Northern Rock Matt Ridley as your sources, you need to double-check their assertions. Pinker insults the Enlightenment principles he claims to defend.
To make sure I wasn’t making a mistake, I went through all of Coyne’s posts written in support of Pinker. It would seem that while there’s much to admire in his words, especially those concerning his area of expertise – psycholinguistics – Pinker either falls short when articulating his worldview or, more likely, the moment he steps out of his comfort zone and begins addressing the humanities, goes cuckoo. Coyne repeatedly asserts that Pinker is a classic progressive liberal who’s constantly misunderstood because he refuses to gloss over matters of political correctness that the authoritarian left doesn’t want you to discuss. But it’s really hard to stand by him when – like Monbiot says about Enlightenment Now – he’s accused of misrepresenting rape statistics in The Better Angels of Our Nature (2011).
Anyway, the Princeton historian David Bell also joined in with a scathing review for The Nation, where he called Enlightenment Now a 20-hour TED talk pushing history as having been “just so” instead of acknowledging the many people’s movements and struggles that deliberately made it so.
Pinker’s problems with history are compounded even further as he tries to defend the Enlightenment against the many scholarly critics who have pointed, over the centuries, to some of its possible baleful consequences. Did Enlightenment forms of reasoning and scientific inquiry lie behind modern biological racism and eugenics? Behind the insistence that women do not have the mental capacity for full citizenship? Not at all, Pinker assures us. That was just a matter of bad science.
Indeed, it was. But Pinker largely fails to deal with the inconvenient fact that, at the time, it was not so obviously bad science. The defenders of these repellent theories, used to justify manifold forms of oppression, were published in scientific journals and appealed to the same standards of reason and utility upheld by Pinker. “Science” did not by itself inevitably beget these theories, but it did provide a new language and new forms of reasoning to justify inequality and oppression and new ways of thinking about and categorizing natural phenomena that suggested to many an immutable hierarchy of human races, the sexes, and the able and disabled. The later disproving of these theories did not just come about because better science prevailed over worse science. It came about as well because of the moral and political activism that forced scientists to question data and conclusions they had largely taken for granted.
It seems Pinker may not be playing as fast and loose with facts, philosophy and the future as sci-fi writers like Yuval Noah Harari (whose Homo Deus is the reason I’ve not read historical surveys since; I recommend John Sexton’s takedown) have, but he’s probably just as bad for riding a cult of personality that has brought, and continues to bring, him an audience that will listen to him even though he’s a psycholinguist monologuing about Enlightenment philosophy. And what’s more, all the reviews I can find of Enlightenment Now have different versions of the same complaints Monbiot and Bell have made.
So I’m going to wilfully succumb to two of the cognitive biases Pinker says blinker our worldview and make things seem more hopeless than they are – availability and negativity – and kick Enlightenment Now off my to-do list.
In sum: what keeps Pinker au courant is his optimism. If only it weren’t so misinformed in its fundamentals…
Hat-tip to Omair Ahmad for flagging the New Republic article. Featured image: Steven Pinker. Credit: sfupamr/Flickr, CC BY 2.0.
For someone who reads very slowly (a 300-page book usually takes a week), Eric Hobsbawm’s Age of Extremes offered an astonishingly enjoyable experience. A week after I picked it up at a secondhand books store, I’m 534 pages in and keep going back to it. While Hobsbawm’s celebrated breadth of knowledge intimidated me enough to give me writer’s block, the book exhibits just the right level of topical fluency, insightfulness and, fortunately, snark.
My only grouse is that Hobsbawm had to have a separate section on the natural sciences in the book’s last chapter. As a result, it is as if he acknowledges that the unique traits of 20th century science don’t quite fit into the stories of anything else that happened in 1914-1991 – which is disappointing. It requires the reader to assimilate advances in quantum mechanics, relativity, semiconductor electronics and ICT by themselves and not together with how the 77 years panned out politically, economically and socially. Of course, Hobsbawm tries every now and then (in the natural sciences section) to contextualise scientific and technological advancements in issues and narratives of societal development, but this doesn’t quite click.
Nonetheless, Age of Extremes is highly recommended, doubly so because, even if the science section seems like an afterthought, it still offers a carefully considered picture of modern science and its philosophical roots. (While some sections seemed facile, this may have been because I regularly read on these topics.) One paragraph in particular (p. 530) caught my eye: Hobsbawm argues that anti-science beliefs took root in the world because science’s subjects were becoming increasingly specialised and abstract, their contents removed ever further from both common sense and sense experience – and, ultimately, from the common man. He then offers the following:
The suspicion and fear of science was fuelled by four feelings: that science was incomprehensible; that both its practical and moral consequences were unpredictable and probably catastrophic; and that it underlined the helplessness of the individual, and undermined authority. Nor should we overlook the sentiment that, to the extent that science interfered with the natural order of things, it was inherently dangerous.
In these lines, I see the perfect raison d’être of the science journalist. It is the task of the science journalist to dispel the pall of inaccessibility and incomprehensibility surrounding science, to lay out its practical and moral consequences, to inspire confidence in those who would doubt its effects, to invite them to participate in it, and to expose its processes so scientists cannot claim authority over the ignorant. And if Authority perceives a threat to itself emerging from science, it is likelier than not that it is advocating for a scientific idea of Authority’s own making, not a ‘natural entity’. In this case, the exposition of the processes of science can be used to challenge Authority.
The Wire published a story about the ‘atoms of Acharya Kanad’ (background here; tl;dr: folks at a university in Gujarat claimed an ancient Indian sage had put forth the theory of atoms centuries before John Dalton showed up). The story in question was by a professor of philosophy at IISER, Mohali, and he makes a solid case (not unfamiliar to many of us) as to why Kanad, the sage, didn’t talk about atoms specifically because he was making a speculative statement under the Vaisheshika school of Hindu philosophy that he founded. What got me thinking were the last few lines of his piece, where he insists that empiricism is the foundation of modern science, and that something that doesn’t cater to it can’t be scientific. And you probably know what I’m going to say next. “String theory”, right?
No. Well, maybe. While string theory has become something of a fashionable example of non-empirical science, it isn’t the only example. It’s in fact a subset of a larger group of systems that don’t rely on empirical evidence to progress. These systems are called formal systems, or formal sciences, and they include logic, mathematics, information theory and linguistics. (String theory’s reliance on advanced mathematics makes it more formal than natural – as in the natural sciences.) And the dichotomous characterisation of formal and natural sciences (the latter including the social sciences) is superseded by a larger, more authoritative dichotomy*: between rationalism and empiricism. Rationalism prefers knowledge that has been deduced through logic and reasoning; empiricism prioritises knowledge that has been experienced. As a result, it shouldn’t be a surprise at all that debates about which side is right (insofar as it’s possible to be absolutely right – which I don’t think will ever happen) play out in the realm of science. And squarely within the realm of science, I’d like to use a recent example to provide some perspective.
Last week, scientists discovered that time crystals exist. I wrote a longish piece here tracing the origins and evolution of this exotic form of matter, and what it is that scientists have really discovered. Again, a tl;dr version: in 2012, Frank Wilczek and Alfred Shapere posited that a certain arrangement of atoms (a so-called ‘time crystal’) in their ground state could be in motion. This could sound pithy to you if you were unfamiliar with what ground state meant: absolute zero, the thermodynamic condition wherein an object has no energy whatsoever to do anything else but simply exist. So how could such a thing be in motion? The interesting thing here is that though Shapere-Wilczek’s original paper did not identify a natural scenario in which this could be made to happen, they were able to prove that it could happen formally. That is, they found that the mathematics of the physics underlying the phenomenon did not disallow the existence of time crystals (as they’d posited it).
It’s pertinent that Shapere and Wilczek turned out to be wrong. By late 2013, rigorous proofs had shown up in the scientific literature demonstrating that ground-state, or equilibrium, time crystals could not exist – but that non-equilibrium time crystals with their own unique properties could. The discovery made last week was of the latter kind. Shapere and Wilczek have both acknowledged that their math was wrong. But what I’m pointing at here is the conviction behind the claim that forms of matter called time crystals could exist, motivated by the fact that the mathematics did not prohibit it. Yes, Shapere and Wilczek did have to modify their theory based on empirical evidence (indirectly, as it contributed to the rise of the first counter-arguments), but it’s undeniable that the original idea was born, and persisted with, simply through a process of discovery that did not involve sense experience.
In the same vein, much of the disappointment experienced by many particle physicists today is because of a grating mismatch between formalism – in the form of theories of physics that predict as-yet undiscovered particles – and empiricism – the inability of the LHC to find these particles despite looking repeatedly and hard in the areas where the math says they should be. The physicists wouldn’t be disappointed if they thought empiricism was the be-all of modern science; they’d in fact have been rebuffed much earlier. For another example, this also applies to the idea of naturalness, an aesthetically (and more formally) enshrined idea that the forces of nature should have certain values, whereas in reality they don’t. As a result, physicists think something about their reality is broken instead of thinking something about their way of reasoning is broken. And so they’re sitting at an impasse, as if at the threshold of a higher-dimensional universe they may never be allowed to enter.
I think this is important in the study of the philosophy of science because if we’re able to keep in mind that humans are emotional and that our emotions have significant real-world consequences, we’d not only be better at understanding where knowledge comes from. We’d also become more sensitive to the various sources of knowledge (whether scientific, social, cultural or religious) and their unique domains of applicability, even if we’re pretty picky, and often silly, at the moment about how each of them ought to be treated (Related/recommended: Hilary Putnam’s way of thinking).
*I don’t like dichotomies. They’re too cut-and-dried a conceptualisation.
Through an op-ed in Nieman Lab, Ken Doctor makes a timely case for explanatory – or explainer – journalism being far from a passing fad. Of the many factors he argues contribute to its rise and persistence in western markets, the first is historical: he believes explainer journalism’s historical basis is more relevant than its technological one, simply because traditional journalism no longer connects the dots well enough.
Second, his argument that explainer journalism is helped by the success of digital journalism takes for granted the resources that have helped it succeed in the west and not so much in countries like India.
So these points make me wonder if explainer journalism can expect to be adopted with similar enthusiasm here – where, unsurprisingly, it is most relevant. Thinking of journalism as an “imported” enterprise in the country, differences both cultural and historical become apparent between mainstream English-language journalism and regional local-language journalism. They cater to different interests and are shaped by different forces. For example, English-language establishments cater to an audience whose news sources are worldwide, who can always switch channels or newspapers and not be worried about running out of options. For such establishments, How/Why journalism is a way to differentiate itself.
Local v. regional
On the other hand, local-language establishments cater to an audience that is not spoiled for options and that depends profoundly on Who/What/When/Where journalism, no matter where its ‘reading diaspora’ is. For them, How/Why journalism is an add-on. In this sense, the localism that Ken Doctor probes in his piece has no counterpart here. It is substituted with a more fragmented regionalism whose players are interested in expanding their readership rather than deepening their scope. In this context, let’s revisit one of his statements:
Local daily newspapers have traditionally been disproportionately in the Who/What/When/Where column, but some of that now-lost local knowledge edged its ways into How/Why stories, or at least How/Why explanations within stories. Understanding of local policy and local news players has been lost; lots of local b.s. detection has vanished almost overnight.
Because of explainer journalism’s reliance on digital and digital’s compliance with the economics of scale (especially in a market where purchasing power is low), what Doctor calls small, local players are not in a position to adopt explainer journalism as an exclusive storytelling mode. As a result of this exclusion, Doctor argues that what digital makes accessible – i.e. what is found online – often lacks the local angle. But it remains to be seen if this issue’s Indian counterpart – digital vs. the unique regional as opposed to digital vs. the small local – is even likely to be relevant. In other words, do smaller regional players see the need to take the explainer route?
Local-level journalism (not to be confused with what is practiced by local establishments) in India is bifocal. On the one hand, there are regional players who cover the Who/What/When/Where thoroughly. On the other, there are the bigger English-language mainstreamers who don’t each have enough reporters to cover a region like India thanks, of course, to its profuse fragmentation, compensating instead by covering local stories in two distinct ways:
as single-column 150-word pieces that report a minor story (Who/What/When/Where) or
as six-column 1,500-word pieces where the regional story informs a national plot (How/Why),
—as if regional connect-the-dots journalism surfaces as a result of mainstream failures to bridge an acknowledged gap between conventional and contextualising journalism – a gap in which academicians, scholars and other experts do what journalists should have done, or rather help journalists do what they must do. Therefore, readers of the mainstream publications have access to How/Why journalism because, counter-intuitively, it is made available in order to repair its unavailability – an unavailability that many mainstreamers believe they have license to perpetuate because they think the ‘profuse fragmentation’ is an insurmountable barrier.
There’s no history
The Hindu and The Indian Express are two Indian newspapers that have carved out a space for themselves as outstanding purveyors of such How/Why journalism; for them, the historical case for its revival doesn’t apply – why fix something that isn’t broken? The “top-drawer” publications Doctor mentions, such as The New York Times and The Washington Post, that feel the need to conspicuously assert this renewal are doing so on the back of the technology they think has finally made the renewal economically feasible. And the premium the Times can charge for packaging Upshot with its other offerings is not something the Hindu or the Express can command now because, for them, How/Why isn’t new, and hasn’t been for some time.
Therefore, while the time may have come in the western mainstream media to “readopt” explainer journalism, its Indian counterpart can’t claim to do the same any time soon because it has neither the west’s historical nor its technological basis. Our motivation has to come from elsewhere.
This mail is not intended to be an apology as much as my own acknowledgment of my existence. Of late, I have become cognizant of what a significant role writing, and having my writing read, plays in the construction of my self-awareness – whether profound or mundane. Even as I live moments, I do not experience them with the same clarity and richness as I do when I write about those moments. Why, I don’t pause to think about something – anything – as much as when I do when I place commas and periods. I don’t recognize possession unless it comes with an apostrophe.
To some extent, this has slowed down the speed with which I can take on life in all its forms and guises; the exhilaration is more prodigious, and the conclusions and judgments more deliberated. To someone standing next to me, in a moment I would later discover to have been the host of an epiphany, I come across as detached and indifferent, as someone lacking empathy. But I have empathy, sometimes too much, at others even suffocating. However, I haven’t bothered explaining this to anyone… until now. And why do I choose to tell you? Because you will read me. You are reading me.
All this has also kindled an awareness of what each word brings with it: a logbook of how memories have been created, recorded and recollected over centuries of the language’s existence. You read sentences from left to right, or right to left or top to bottom depending on the language, and you attribute the purpose of the words you’re parsing to your interpretation of the text. Now, break the flow: go orthogonal and move your eyes in a direction perpendicular to the one that unlocks meaning. Suddenly, you are confronted with words – individual, nuclear words – silently staring at you. Isn’t it a scary sight to look at symbols that suddenly seem devoid of meaning or purpose?
Inky scratches on paper. Like what a prisoner in a high-security prison does with his nails on the walls after years of crippling solitude.
Count how many times each such word appears on the page, in the book, in all the books you own, in all the books that have ever existed. Each such word, whatever it is, has been invoked to evoke multiplets of emotions. Each such word has participated in everything from the proclamation that burnt down Nero’s Rome to the one that ended slavery in Western civilization, from Antony’s selfless lament to Nietzsche’s self-liberating one. Words have not been used but repeated, to simply put together a finite number of intentions in seemingly infinite ways. Each such word gallantly harbors a legacy of the need for that word.
As Roland Barthes writes in Camera Lucida, look at a portrait photograph of Napoleon Bonaparte’s youngest brother, Jerome, taken in 1852. Imagine looking into the brother’s eyes – and tell yourself that you are now looking into the eyes that once looked into Napoleon’s. Don’t you feel a weight from the sensation that what you’re looking at may contain a scar where a powerful man’s stare etched itself? I feel a similar weight when I use words; I feel a constant reminder ringing in my head to use them in a way that preserves their dignity, their heritage. I feel that there is wisdom in their shapes and strokes. It calms me deeply, just as a ritual and its processes might.
And when such legacies are brought to bear on every experience of mine – howsoever trivial – I can’t help but become addicted to their reassuring wisdom, their reassuring granular clarity. When writing with such words, I am more pushed to re-evaluate whatever it is that I am saying, more encouraged to plumb the murkier depths of my conscience that are closed to simpler wordless introspection. When I write, I feel like I finally have the tools I have long yearned for to build strong character, and find inner peace when I seek for it the most.
This is pretty cool. Twitter user @jamiebgall tweeted this picture he’d made of the Periodic Table, showing each element alongside the nationality of its discoverer.
It’s so simple, yet it says a lot about different countries’ scientific programs and, if you googled a bit, their areas of focus at different points in history. For example,
A chunk of the transuranic actinides originated in American labs, likely thanks to the rapid development of particle-accelerator technology there from the 1930s onward.
Hydrogen was discovered by a British scientist (Henry Cavendish) in the late 18th century, pointing to the country’s early establishment of research and experimental institutions. UK scientists were responsible for the discovery of 23 elements in all.
The 1904 Nobel Prizes in physics and chemistry went to Lord Rayleigh and William Ramsay, respectively, for discovering four of the six noble gases. One of the other two, helium, was co-discovered by Pierre Janssen (France) and Joseph Lockyer (UK). Radon was discovered by Friedrich Dorn (Germany) in 1898.
Elements 107 to 112 were discovered by Germans at the Gesellschaft für Schwerionenforschung (GSI), Darmstadt. Elements 107, 108 and 109 were discovered by teams led by Peter Armbruster and Gottfried Münzenberg between 1981 and 1984. Elements 110, 111 and 112 were discovered by Sigurd Hofmann, et al, between 1994 and 1996. All of them owed their origin to the UNILAC (Universal Linear Accelerator) commissioned in 1975.
The discovery of aluminium, the most abundant metal in the Earth’s crust, is attributed to Hans Christian Oersted (Denmark in 1825) even though Humphry Davy had developed an aluminium-iron alloy before him. The Dane took the honours because he was the first to isolate the metal.
Between 1944 and 1952, the USA discovered seven elements; this ‘discovery density’ is beaten only by the UK, which discovered six elements in 1807 and 1808. In both countries, however, these discoveries were made by a small group of people finding one element after another. In the USA, elements 93-98 and 101 were discovered by teams led by Glenn T. Seaborg at the University of California, Berkeley. In the UK, Humphry Davy took the honours, isolating six elements in 1807 and 1808.
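Tallies like these fall out of a few lines of code once the element-to-discoverer mapping is in hand. Below is a minimal sketch in Python; the `discoveries` dataset is a partial, hand-assembled sample for illustration only (not the complete record of either country), and the nine-year window mirrors the 1944–1952 span mentioned above.

```python
from collections import Counter

# Illustrative, hand-assembled sample of element discoveries,
# element -> (country, year). A partial list, not the full record.
discoveries = {
    "neptunium":   ("USA", 1940),
    "plutonium":   ("USA", 1940),
    "americium":   ("USA", 1944),
    "curium":      ("USA", 1944),
    "berkelium":   ("USA", 1949),
    "californium": ("USA", 1950),
    "mendelevium": ("USA", 1955),
    "potassium":   ("UK", 1807),
    "sodium":      ("UK", 1807),
    "calcium":     ("UK", 1808),
    "barium":      ("UK", 1808),
    "strontium":   ("UK", 1808),
    "boron":       ("UK", 1808),
}

# Total discoveries per country in this sample
per_country = Counter(country for country, _ in discoveries.values())

def densest_window(data, country, span=9):
    """'Discovery density': the most discoveries the given country
    made within any `span`-year window in the sample."""
    years = sorted(y for c, y in data.values() if c == country)
    return max(sum(1 for y in years if start <= y < start + span)
               for start in years)

print(per_country)                          # Counter({'USA': 7, 'UK': 6})
print(densest_window(discoveries, "UK"))    # 6 (all within 1807-1808)
```

A fuller dataset (e.g. scraped from the tweeted table itself) would slot straight into the same `discoveries` shape.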
I have never been able to fathom poetry. Not because it’s unensnarable—which it annoyingly is—but because it never seems to touch upon that all-encompassing nerve of human endeavour supposedly running through our blood, transcending cultures and time and space. Is there a common trouble that we all share? Is there a common tragedy that is not death that we all quietly await that so many claim is described by poetry?
I, for one, think that that thread of shared memory is lost, forever leaving the feeble grasp of our comprehension. In fact, I believe that there is more to be shared, more to be found that will speak to the mind’s innermost voices, in a lonely moment of self-doubting. Away from a larger freedom, a “shared freedom”, we now reside in a larger prison, an invisible cell that assumes various shapes and sizes.
Sometimes, it’s in your throat, blocking your words from surfacing. Sometimes, it has your skull in a death-grip, suffocating all thoughts. Sometimes, it holds your feet to the ground and keeps you from flying, or sticks your fingers in your ears and never lets you hear what you might want to hear. Sometimes, it’s a cock in a cunt, a blade against your nerves, a catch on your side, a tapeworm in your intestines, or that cold sensation that kills wet dreams.
Today, now, this moment, the smallest of freedoms, the freedoms that belong to us alone, are what everyone shares, what everyone experiences. It’s simply an individuation of an idea, rather a belief, and the truth of that admission—peppered as it is with much doubt—makes us hold on more tightly to it. And as much as we partake of that individuation, like little gluons that emit gluons, we inspire more to pop into existence.
Within the confines of each small freedom, we live in worlds of our own fashioning. Poetry is, to me, the voice of those worlds. It is the resultant voice, counter-resolved into one expression of will and intention and sensation, that cannot, in turn, be broken down into one man or one woman, but only into whole histories that have bred them. Poetry is, to me, no longer a contiguous spectrum of pandered hormones or a conflict-indulged struggle, but an admission of self-doubt.
Curiosity can be devastating on the pocket. Curiosity without complete awareness has the likelihood of turning fatal.
At first, for example, there was nothing. Then, there was a book called The Feynman Lectures on Physics (Vol. 3) (Rs. 214) in class XII. Then, there was great interest centered on the man named Richard Feynman, and so, another book followed: Surely You’re Joking, Mr. Feynman! (Rs. 346) By the time I’d finished reading it, I was introduced to that argumentative Scottish coot named David Hume, whose Selected Essays (Rs. 425) sparked my initial wonderment on logical positivism as well as torpor-inducing verbosity (in these terms, his only peer is Thomas Pynchon (Against the Day, Rs. 800), and I often wonder why many call for his nomination for a Nobel Prize in literature. The Prize is awarded to good writers, right? Sure, he writes grandiose stuff and explores sensations and times abstract to everyone else with heart-warming clarity, but by god do you have to have a big attention span to digest it! In contrast: Vargas Llosa!).
I realized that if I had to follow what Hume had to say, and then Rawls, and then Sen (The Idea of Justice, Rs. 374) and Kuhn (The Structure of Scientific Revolutions, Rs. 169 – the subject of my PG-diploma’s thesis) and Kant, and then Schopenhauer, Berkeley and Wittgenstein, I’d either have to study philosophy after school and spend the rest of my days in penurious thought or I’d have to become rich and spend the rest of my days buying books while not focusing on work.
An optimum course of action presented itself. I had to specialize.
But how does one choose the title of the school of thought one finds agreeable without perusing the doctrines of all the schools on offer? I was back to square one. Then, someone suggested reading The Story of Philosophy (Rs. 230) by Will Durant. When I picked up a copy at a roadside bookstore, I suspected its innards had been pirated, too: the book would have been better suited to the hands of someone in need of a quick-reference tool; the book didn’t think; the book wasn’t the interlocutor I was hoping it would be.
I wanted dialogue, I wanted dialectic in the context of Heinrich Moritz Chalybäus’s thesis (Systems of Speculative Ethics as translated by Alfred Edersheim, 1854 – corresponding to System of Speculative Philosophy by G.W.F. Hegel). I wanted the evolution of Plato (The Republic, Rs. 200), Aristotle (Poetics, Rs. 200), Marcus Aurelius (Meditations, Rs. 200). That was when I chanced upon George Berkeley’s Principles of Human Knowledge (Rs. 225) and Three Dialogues Between Hylas and Philonous (Rs. 709). Epistemology then began to take shape; until that moment, however, it was difficult to understand the inherently understood element as anything but active-thought. Its ontology started to become clear – and not like it did in the context of The Architecture of Language by A. Noam Chomsky (Rs. 175), which, to me, still was the crowning glory of naturalist thought.
Where does the knowledge, “the truth”, of law arise from? What is the modality within which it finds realization? Could there exist an epistemological variable (empirically speaking) the evaluation of which represents a difference between the cognitive value of a statement of truth and that of a statement of law? Are truths simply objective reasons whose truth-value may or may not be verifiable?
Upon the consumption of each book, a pattern became evident: all philosophers, and their every hypothesis, converged on some closely interrelated quantum mechanical concepts.
Are the mind and body one? Does there exist an absolute frame of reference? Is there a unified theory at all?
Around the same time, I came to the conclusion that advanced physics held the answers to most ontological questions – as I have come to understand it must. Somewhere-somewhen in the continuum, the observable and the unobservable have to converge, coalesce into a single proto-form, their constituents fuse in the environment afforded them to yield their proto-reactants. Otherwise, the first law of thermodynamics would stand violated!
However, keeping up with quantum mechanics would be difficult for one very obvious reason: I was a rookie, and it was a contemporary area of intense research. To get around this, I started by studying the subject’s most pragmatic parts: Introduction to Quantum Mechanics by Powell & Crasemann (Rs. 220), Solid State Physics by Ashcroft & Mermin (Rs. 420), Quantum Electrodynamics by Richard Feynman (Rs. 266), and Electromagnetic Systems and Radiating Waves by Jordan & Balmain (Rs. 207) were handy viaducts. Not like there weren’t any terrors in between, such as Lecture Notes on Elementary Topology and Geometry by Singer & Thorpe.
At the same time, exotic discoveries were being made: at particle colliders, at optical research facilities, in deep space by probes and ground-based observatories, within the minds of souls more curious than mine. Good for me, the literature corresponding to all these discoveries was to be found in one place: the arXiv pre-print servers (access to which costs all of nothing). These discoveries included quantum teleportation, room-temperature superconductivity, supercomputers, metamaterials, and advancements in ferromagnetic storage systems.
(I also was responsible for discovering some phenomena exotic purely to me in this period: cellular automata and computation theory – which I experimented with using Golly and Mirek’s Cellebration, and fuzzy logic systems and their application in robotics – experimented with using the Microsoft Robotics Developer Studio.)
What did these discoveries have to do with Hume’s positivism? That I could stuff 1 gigabyte’s worth of data within an inch-long row of particles championed empiricism, I suppose, but beyond that, the concepts’ marriage seemed to demand the inception of a swath of interdisciplinary thought. I could not go back, however, so I ploughed on.
A Brief History of Time (Rs. 245) did not help – Hawking succeeded splendidly in leaving me with more questions than answers (Gravitation and Cosmology: Principles and Applications of the General Theory of Relativity by Steven Weinberg (Rs. 525) answered some of them). The Language Instinct by Harvard-boy Steven Pinker (Rs. 450) charted the better courses of rationality into sociology and anthropology, whereas my intuition that Arundhati Roy would reward governance with a similar fashion of rational unknotting was proved, expensively, right: The Algebra of Infinite Justice, at Rs. 302, lays bare all the paradoxes that make India India.
For literature, of course, there were Orhan Pamuk and Umberto Eco, Lord Tennyson and Sylvia Plath, de Beauvoir, le Guin and Abbott to fall in love with (My Name is Red (Rs. … Whatever, it doesn’t matter!), The Name of the Rose, and The Mysterious Flame of Queen Loana are to be cherished, especially the last for its non-linear narration and the strange parallels waiting to be drawn with hermeneutics, such as the one delineated by E.H. Carr in his What Is History?). Plath’s works, of course, were an excursion into the unexplored… in a manner of speaking, just as le Guin’s imagination and Abbott’s commentary are labours unto the familiar.
Learn to like ebooks. Or turn poor.
Ultimately, that was all that I learnt. Romantic though being an autodidact may sound, the assumption of its mantle involves the Herculean task of braiding all that one learns into a single spine of knowledge. The more you learn, the farther you are from where you started; the more you have learnt, the more ambitious you get… I cannot foresee an end.
Currently, I am reading One Day in the Life of Ivan Denisovich by Soviet-era exile Aleksandr Solzhenitsyn (war-time dystopian fiction became a favourite along the way, after reading a history of firearms in Russia, a history of science and technology in Islam, How Things Work – gifted to me by my father when I was 11 – and Science and Civilisation in China by Needham & Gwei-Djen (Rs. 6,374 – OK, now it matters)) and Current Trends in Science: Platinum Jubilee Edition – Indian Academy of Sciences, lent to me by Dr. G. Baskaran. At each stage, a lesson about the universe is learnt – a minuscule piece told in the guise of one author’s experiences and deductions, to fit into a supermassive framework of information that must be used by another’s intelligence. A daunting task.