“I take a different view of science as a method; to me, it enters the human spirit more directly. Therefore I have studied quite another achievement: that of making a human society work. As a set of discoveries and devices, science has mastered nature; but it has been able to do so only because its values, which derive from its method, have formed those who practice it into a living, stable and incorruptible society. Here is a community where everyone has been free to enter, to speak his mind, to be heard and contradicted; and it has outlasted the empires of Louis XIV and the Kaiser. Napoleon was angry when the Institute he had founded awarded his first scientific prize to Humphry Davy, for this was in 1807, when France was at war with England. Science survived then and since because it is less brittle than the rage of tyrants. This is a stability which no dogmatic society can have. There is today almost no scientific theory which was held when, say, the Industrial Revolution began about 1760. Most often today’s theories flatly contradict those of 1760; many contradict those of 1900. In cosmology, in quantum mechanics, in genetics, in the social sciences, who now holds the beliefs that seemed firm fifty years ago? Yet the society of scientists has survived these changes without a revolution and honors the men whose beliefs it no longer shares. No one has been shot or exiled or convicted of perjury; no one has recanted abjectly at a trial before his colleagues. The whole structure of science has been changed, and no one has been either disgraced or deposed. Through all the changes of science, the society of scientists is flexible and single-minded together and evolves and rights itself. In the language of science, it is a stable society.”
— Jacob Bronowski, Science and Human Values, Chapter 3: The Sense of Human Dignity
The idea of creating a future that is favorable for our descendants is in vogue today.
But before even thinking about creating an ideal future, we must address a more fundamental problem about the limits of what is knowable.
That problem is the impact of the growth of knowledge on what can be predicted.
All knowledge is conjectural, and our theories and their predictions are inherently error-prone. That doesn’t mean they tell us nothing about the world. But they are not infallible.
Scientific knowledge is predictive knowledge. This defining characteristic of science allows mistaken theories to be corrected upon making false predictions. This, in turn, allows for an improvement in our ability to predict.
But the future ideas and actions of people are physically impossible to predict, since both depend on the growth of human knowledge.
Since the future depends on the content of knowledge yet to be created, content that cannot possibly be known today, we will never know what future people will want. For if there were a method to predict some piece of knowledge that is only going to be discovered next year, then by using that method we would have gained that knowledge today. A contradiction.
II.
We ought to separate predictions from prophecies here. A prediction is a logical consequence of a scientific theory, applicable where we can explain why human choice will have no impact. But anyone who tries to guess outcomes in situations where knowledge creation will have an impact is attempting prophecy.
A prophecy does not become a prediction just because an “expert” makes it using “science”. For example, we know from our best scientific theories that the Sun will continue to shine for another 5 billion years or so, after which it will have exhausted its fuel and turned into a red giant star. That would be doom for any life on the Earth, as the Sun would engulf and destroy its neighboring planets. Or would it? If any of our descendants decide to stay on the Earth at that time, they might do everything in their power to prevent it. Of course, today’s technology is nowhere near capable of such a feat, nor is it inevitable that our descendants will overcome the challenge.
“The color of the Sun ten billion years hence depends on gravity and radiation pressure, on convection and nucleosynthesis. It does not depend at all on the geology of Venus, the chemistry of Jupiter, or the pattern of craters on the Moon. But it does depend on what happens to intelligent life on the planet Earth. It depends on politics and economics and the outcomes of wars. It depends on what people do: what decisions they make, what problems they solve, what values they adopt, and on how they behave towards their children.”
— David Deutsch, The Fabric of Reality, Chapter 8: The Significance of Life
Given that the future of humanity is unknowable, what ought we do in order to create a favorable future for our descendants (if anything at all)?
III.
Our inability to predict the growth of knowledge is the only fundamental impediment to our ability to predict the future. And hence, we do not know what future people will want.
This raises serious problems for moral philosophies of altruism, such as the “longtermism” expressed by William MacAskill in his popular book, What We Owe the Future.
Altruists hold that morality is essentially about serving the interests of others. MacAskill and the longtermists add future people to this calculus, arguing that lives that do not yet exist also matter.
Such a morality necessarily implies that people who exist today must sacrifice for the people of tomorrow. For example, longtermists argue that we have to preserve the climate for fear that future people will otherwise live in a world that is worse off. This argument amounts to calling for restrictions on which kinds of knowledge people are allowed to create and act upon in the present.
There are many problems with such a tragic view of morality. Firstly, longtermism does not fully take progress in moral understanding into account. An assumption is made that the values of future generations will be the same as those in the present. But people are fallible: their moral knowledge is laden with errors just as their scientific knowledge is. We ought to hope that the morality of our descendants is utterly alien to our own because it may be better than our own.
If longtermists had existed back when black people were widely regarded as morally inferior to white people, would their moral calculus have included the prosperity of future black people? It seems like it couldn’t possibly have. More generally, longtermism can’t take into account progress in moral knowledge, nor what future generations will choose to value. Longtermists impose their values onto future generations. They are time imperialists.
Another issue is that if altruistic morality is taken to its logical conclusion, then everyone would be trying to solve everyone else’s problems. How could that possibly be more effective than everyone trying to solve their own problems?
The notion that sacrifice is good is pervasive. But caring for others rather than for yourself creates more problems than it solves. Morality is about the question “What should be done next?”, not “Who should be helped next?”
If we are here to help others, what on Earth are the others here for?
IV.
What we actually need to be is selfish, not altruistic. We need to make as rapid progress as possible so that the people of the future themselves will be at a starting point where they can make even more rapid progress.
The first line of that previous paragraph can really put some people off. When a philosophy is laden with moralistic language, it becomes hard to error-correct, because those who espouse its ideas tend to presume they’re morally superior to those who disagree with them. “Oh, you don’t want to slow down progress? That means you don’t care about the lives of future people.”
Usually, people who hold such a view of morality are essentially religious fanatics who aren’t using the term religion. They’ve created a modern religion in which man is cast as the devil, and there is no God and no savior. They’ve obviously removed the traditional trappings. But they still hold on to the same underlying, moralizing core beliefs of religion.
They forget that moralizing itself is no argument.
V.
Selfishness is not callousness, and altruism is neither kindness nor generosity. Altruism is subordinating one’s own preferences to those of others. It’s a zero-sum game; it’s not win-win. Selfishness is being concerned about myself precisely because I am a good person. In being concerned about myself, my welfare, my wealth, and my happiness, I become the kind of person who helps others, not at a cost to myself, but by being involved in win-win relationships. That’s the key. With selfishness, I want to win, but not so you lose. That’s callousness. I am selfish so that someone else can win, too.
People who focus on themselves and their own problems make faster progress than those who aspire to “do what’s right” despite experiencing internal resistance to doing so. Those who choose careers in order to have a positive impact on the world, even when a part of them desperately wishes they were doing otherwise, will struggle to make progress. And ironically, such choices cause more suffering in the present (namely, that of the altruist).
We need to solve problems that genuinely interest us in order to make progress as fast as possible. It’s the best thing for everyone—including those yet to exist in this world.
Taking wealth away from where progress is happening fastest and gifting it to where it’s not is going to hurt more people than it ever helps.
Last month, OpenAI introduced ChatGPT to the world. The chatbot immediately took the Internet by storm, crossing a million users within five days of its research-preview launch.
There’s a ton of excitement around its impressive capabilities. Scientists, journalists, writers, programmers, teachers, students, people-working-jobs-that-it-threatens, and of course, AI researchers—everyone’s been talking about the new chatbot.
But there’s a fundamental difference between a tool like ChatGPT and the much-feared artificial general intelligence (AGI), a difference the fearmongers fail to appreciate. Something like ChatGPT does not get us any closer to attaining general intelligence.
Though the transistor counts, memory, and speed of our computers have grown exponentially over the years, AGI won’t emerge as a product of this ever-increasing complexity.
“An AGI is qualitatively, not quantitatively, different from all other computer programs…Expecting to create an AGI without first understanding in detail how it works is like expecting skyscrapers to learn to fly if we build them tall enough.”
— David Deutsch, “Creative Blocks”
The actual barrier to developing artificially intelligent entities is not a breakthrough in computer science but in philosophy.
The problem is that we don’t understand how creativity works. We humans have it. The ability to think and create new explanations about what is out there and how it works is the fundamental functionality that separates us from animals. But we haven’t yet been able to explain how creativity functions.
None of our programs has ever created a new explanation—something entirely different from what it was coded to do[1]. They can create new content by interpolating within a dense manifold of training data. But that will always be constrained to existing knowledge.
There’s the crux of it. Until we find a way to explain creativity in people, we’re only going to be able to create programs that obey their programmers. People admire ChatGPT as an AI, but they should really admire the programmers who wrote it[2].
An AGI will be able to disobey and create novel theories in its mind. It will be able to rationally and creatively criticize ideas.
“But,” say the naysayers, “certain AI programs (including ChatGPT) can learn, can’t they?”
Not so. In spite of educational institutions upholding this belief, learning isn’t a product of instruction. Knowledge creation in the minds of people happens through an active process of conjecture and criticism. We don’t absorb ideas, nor do we copy them inside our brains. Rather, we recreate knowledge using the method of trial and the elimination of error, as the epistemology of Karl Popper—that of critical rationalism—explains more deeply than any other philosophy of learning.
If a system has general intelligence, it will be able to take control over its own learning.
This obviates the specific concern about rogue AGI. AGIs could indeed be very dangerous. But so are humans. Since knowledge—scientific and moral—is objective, and AGIs would have the power to think critically about moral ideas and question their own decisions, they will be able to converge upon moral truth with us. They won’t be any more dangerous, or any more wonderful, than we humans are.
Footnotes
Credit for the ideas in this piece goes to David Deutsch; any errors are my own. In particular, two of his essays, “Creative Blocks” and “Beyond Reward and Punishment”, were a heavy source of inspiration for this piece.
[1] This is an oversimplification for the sake of readability. Transformers can create new content by interpolating within a dense manifold of training data. But arguably it will always only be a “mish-mash” of the existing data. You can read more about this here.
For this blog post, I’m just sharing one of the simplest tweets I’m proudest to have written, the thought of which still baffles me:
Ironic how we are trying to create computers that can “think for themselves” and at the same time sending kids to school—explicitly wanting them to follow the instructions—making them learn how to think like a machine.
Almost every artifact of modern civilization is a manifestation of thought.
Think about it: from primitive huts to heaven-kissing skyscrapers, from Stone Age doodles to the Mona Lisa, from the significance of the dollar bill to the footprints on the Moon—all is but a product of thought.
Likewise, the most productive and the most destructive of inventions trace their way back to some knowledge in a human mind.
In a legendary two-minute YouTube clip, Steve Jobs explains the “secrets of life”, essentially arguing that we can change the world by rejecting a conventional, pessimistic notion about life. He says:
“… when you grow up, you tend to get told that the world is the way it is and your life is just to live your life inside the world, try not to bash into the walls too much, try to have a nice family, have fun, save a little money. But that’s a very limited life.
Life can be much broader, once you discover one simple fact, and that is, everything around you that you call life was made up by people that were no smarter than you. And you can change it, you can influence it, you can build your own things that other people can use…
… That’s maybe the most important thing – to shake off this erroneous notion that life is there and you’re just gonna live in it, versus embrace it, change it, improve it, make your mark upon it.
I think that’s very important and however you learn that, once you learn it, you’ll want to change life and make it better, ’cause it’s kinda messed up, in a lot of ways. Once you learn that, you’ll never be the same again.”
— Steve Jobs, 1994, interview with the Santa Clara Valley Historical Association
Ideas have the power to shape and mold the physical universe, much like any other force of nature.
Hence, “the terrifying power of ideas”, as philosopher Karl Popper wrote, “burdens all of us with grave responsibilities. We must not accept or refuse them unthinkingly. We must judge them critically.”
This critically rational eye should apply to every idea, including pessimistic ones about the limitations of human agency and the reach of knowledge. Progress depends on it.
How it works is simple: you set a time limit for yourself, and then you have to write as much as you can within that time frame. The catch? You can’t stop writing, even if you want to. If you pause for too long, the app will delete your entire piece. Talk about pressure!
Only once you’ve reached the end of your time limit can you save your work. (And if you’re wondering… no, you can’t copy the text before the session ends!)
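The rules above are simple enough to sketch in a few lines of Python. This is a hypothetical reconstruction of the mechanic, not the app’s actual code; the class name, the default limits, and the method names are all invented for illustration.

```python
import time

class DangerousSession:
    """Minimal sketch of the delete-on-pause rule. The class name and the
    default limits are made up for illustration; they are not the app's."""

    def __init__(self, session_length=300.0, pause_limit=5.0, start=None):
        self.session_length = session_length   # total writing time, in seconds
        self.pause_limit = pause_limit         # longest allowed pause, in seconds
        self.start = time.monotonic() if start is None else start
        self.last_keystroke = self.start
        self.buffer = []

    def type_char(self, ch, now=None):
        now = time.monotonic() if now is None else now
        # Pausing longer than the limit wipes the entire draft.
        if now - self.last_keystroke > self.pause_limit:
            self.buffer.clear()
        self.buffer.append(ch)
        self.last_keystroke = now

    def can_save(self, now=None):
        # Saving is only allowed once the full session has elapsed.
        now = time.monotonic() if now is None else now
        return now - self.start >= self.session_length

    def text(self):
        return "".join(self.buffer)
```

Passing explicit timestamps makes the rule easy to see: a seven-second pause on a five-second limit wipes everything written so far, and saving only unlocks after the session length has elapsed.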
This app stops me from worrying too much about editing or formatting and gets me straight to the meat of the task: actually writing the first draft.
And for some reason it’s actually a lot of fun! It adds a gamifying effect to writing, if you like.
This is a good example of productive constraints: a constraint providing a non-coercive rule for creativity to flourish.
I’ve been using the app ever since it was recommended in David Perell’s writing course for high schoolers, Write of Passage Liftoff.
If you’re struggling with writer’s block, or “perfectionism”, or just looking to have more fun with your writing process, challenge yourself with The Most Dangerous Writing App!
– Why is gravity still considered a force when it has been known since the 1910s that gravity is just a manifestation of the curvature of spacetime? (H/T: Twitter)
– Will artificially intelligent entities be conscious? Even if they won’t, when an AI starts to look and talk like a person, won’t it be irresistible for humans to see it as conscious? We can doubt the consciousness of humans, but that’s purely an intellectual exercise. In reality, it’s irresistible to treat other people as having feelings and emotions. What might racism against AI look like? (Sparked by Sam Harris’ Making Sense of Artificial Intelligence.)
Recently, I went on a solo meditation retreat somewhere close to the Himalayas for 4 days.
The shadow of me sitting cross-legged on a rock.
It was a powerful experience and I am glad to have done it.
From the moment I started planning and telling relevant people about it, some looked at this almost as a suicide mission.
A sixteen-year-old going all alone TO MEDITATE in a new city, a new state, among the cold mountains, with bad connection, chances of landslides, the likelihood of being eaten by a leopard, getting kidnapped or, worst of all, getting converted into a full-time monk and never returning home!
Why can’t you just meditate here?
I’ve always wanted to go to the Himalayas and just meditate there.
In a sort of joking manner (I say “sort of” because I’m very much capable of doing this), I’ve felt for the longest time that becoming a monk is the best career option on the market.
Monks are like the happiest people in the world! And if nothing works out, I’mma go be a monk in the Himalayas.
The Himalayas.
The reason I actually wanted to go on the retreat was… well, there was no particular reason. I just felt like going there one day. I wanted to go somewhere adventurous, somewhere peaceful. And I wanted to go there as soon as I could.
I realised that nothing was stopping me from actually doing it. If I couldn’t become a full-time monk just yet, I could still go on a short retreat. With my own money. Entirely on my own.
And so, exactly 21 days after the first thought that “I could just go”, I booked plane tickets to my desired location and a not-so-fancy lodge for the time I intended to stay, and boom, it was happening.
I’m writing these words on the plane back “home”. (I write home in scare quotes because I’ve been thinking a lot about this lately. What is home? Is it the place we spend most of our time? Is it the place where we’ve invested in buying an actual house and wholeheartedly crafted it according to our whims? Is it the place where we get “the best food”? I don’t know. I think the world is my home, and everywhere I go, it’s my duty to make it actually so.) That was a huge bracket.
As I write these words on my way back, I feel elated. And humbled. One of my big intentions for the retreat was to be freed again from vanity and to recognize that I was no one and no-thing. But of course, pride is such a thing that it’s possible to take pride in one’s humility itself. And becoming humble, like most other things (e.g. being happy, calm or at peace), isn’t a one-time achievement. It’s like becoming healthy or losing weight: one needs to consistently put in the work to REMAIN in that state.
The reason I feel elated is that I did something truly adventurous. It wasn’t an easy ride. I had my fair share (though reasonably few) of anxieties. Especially when the lights cut out on night #1 of the retreat. That was rough. But I tried remaining calm, and that worked. An absence of photons in the visible wavelengths is generally OK and something I can handle. Sure, the unfamiliarity of the situation caused some concern, but if the world were to be my home, I must act like it. The point is, I undertook a serious challenge. And many of the concerns my family and friends raised about it were probably for good reason.
What if something happens to you?
Well, then that would be way better than if nothing ever happened to me. That’s why I never let any of those concerns give me second thoughts about going on that adventure.
All in all, doing something bigger than myself is what makes me feel so elated right now. (Also, at the airport I caught sight of someone who seemed to be backpacking across India. I mustered up the courage and calm required to go start a conversation with her even though she was reading—which is a very good excuse not to disturb a stranger you wish to speak to. “They’re busy.” But I went up and started talking anyway, found out some cool things about her, and that again makes me feel quite good. There was no need even for the slightest hesitation. Talk to strangers, kids.)
Now let’s go into the specifics of the retreat and I’ll explain the humility aspect of it.
I experimented with non-dual mindfulness. It wasn’t my first time doing this, but on this retreat I really wanted to “get it” and feel humbled by my true first-person nature.
I used the Waking Up app created by Sam Harris and meditated for about 5 hours on day #1. I went outside, sat on a rock, and did my thing. I also walked very intentionally every now and then, because you just can’t physically sit that long. Walking meditations are a thing. On day #2, I felt lousier, and the rain outside made it a bit worse; I had to stay indoors for some time and do the best I could there.
I was almost completely disconnected (but perhaps felt more connected than I ever had). The only communication was with my parents, whom I occasionally had to send updates to by text, and with the two people (and lone dog) at the lodge who gave me food and a little company when it got dark and cold.
Otherwise, there was no “work”. I made a resolution to read no books, write nothing for publishing, and of course use no social media. I meditated, I ran, and I wrote in my journal. That’s all.
Me running.
I was there for four days and three nights, but I really got only two days to meditate fully. It was peaceful otherwise too.
I’m not sure if I grasped the no-self point experientially. I’ve always gotten it theoretically, though. I just need to feel it. Maybe on my next retreat. Or during a “regular” day of meditation practice.
I want to maintain a composure of humility. I’ve generally been thought of as quite proud and boastful, and I want to lessen that to a healthy degree. It will take consistent work. Which I’m ready for.
With this retreat, I got time to rest. And I needed it, not gonna lie.
I’ve been pulled in many directions for some time now, and that stops me from doing anything at all. So I while away my time checking email and my Twitter notifications for the 34th time of the day and get saddened upon not finding anything quite stimulating.
But without any of that, I could actually take each step on the retreat. So I intend not to fall back into my frivolous habits when I get back “home”. Instead, I’ll intentionally carve out time blocks for certain activities and actually try very hard to follow them this time around.
I took pictures on my retreat, but without caring much about how they came out. I’d just take them during the day and check them in the Photos app at night. I think there’s a certain way to take pictures without being entirely sucked up by the act of doing so, a way that lets you stay present and actually enjoy the real view (which, FYI, is so much better than the one on the camera). I tried to take pictures that way. For the sake of capturing moments, as it were.
That’s pretty much it.
I’ll always be a work in progress and always at the beginning. Yet, this was very fun.
Until the next one. But for now, all I have is the here and now. Each step.
Note: I’m assuming you’ve already heard “journaling is good for you”. And that you’ve somewhat been convinced. Carry on if that’s the case. If not, spend half an hour watching “journaling” videos on YouTube and come back here to add meaningful structure to the habit that’s going to change your life.
I’ve been using some journaling / reflection frameworks recently for increased productivity, meaning and motivation in my life. I want to share those same frameworks so you can get value from them too. Hope this helps!
The daily
Every day I ask myself two questions (one in the morning and the other in the evening) as American polymath Benjamin Franklin would ask himself.
From Franklin’s diary. Source: The Autobiography of Benjamin Franklin
The morning question: What good shall I do this day?
The evening question: What good have I done today?
I answer these in my physical notebook journal. You can do what suits you best. Pen and paper is good because it leaves aside all the distractions from your devices. If you can maintain a distance from the ease with which you click “new tab” on your laptop, you could use a digital note-taking system. Just FYI, a pen and paper also feels good.
The weekly
To track and reflect on the week, I ask myself three questions (inspired by James Clear this time around):
What went well?
What didn’t go so well?
What did I learn?
I do this in my physical notebook journal too.
The monthly, quarterly and yearly
The same questions we asked for our weekly review can be duplicated into the monthly, the quarterly, and the yearly.
I don’t do monthly reviews. I think a month is too short an interval to reflect on anything of sizeable weight, and monthly reviews mostly overlap with the weekly ones. I do quarterly and yearly reviews, and I keep those in my Notion database for easier access and organization, even if I don’t open them for a while.
What I have to do vs. what I want to do
Productivity needn’t be cold. You can add intention to the things you do (and probably save your life) with the help of this framework.
Make a Venn diagram consisting of one circle with all the things you “have to” do and another circle with all the things you “want to” do. The intersection will contain the things you both “want to” and “have to” do.
If there are a lot of things in the “have to” space and I figure I’m not doing many things I want to be doing, then I know there’s something wrong and I change my path accordingly.
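The framework boils down to simple set logic, so here is a minimal sketch in Python. The task lists are invented examples; the threshold check at the end is just one way to encode the warning sign described above.

```python
# The two circles of the Venn diagram as Python sets (example tasks are made up).
have_to = {"finish homework", "reply to emails", "write blog post"}
want_to = {"write blog post", "go on a retreat", "learn piano"}

both = have_to & want_to          # the intersection: obligations you also enjoy
only_have_to = have_to - want_to  # pure obligation
only_want_to = want_to - have_to  # pure desire

# One possible warning sign: lots of pure obligation, little overlap.
if len(only_have_to) > len(both):
    print("Something may be wrong. Consider changing your path.")
```

Running the sketch on these example sets flags a problem, since two tasks are pure obligation and only one sits in the overlap.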
Example diagram
There’s also another (colder) framework, the Eisenhower Matrix, that can be used to figure out what you need to get done in a perhaps simpler way. You essentially lay out the things you have to do in terms of important / not important and urgent / not urgent. It’s a cool framework, but I don’t use it because it doesn’t take the “want to” aspect into account and hence doesn’t make me think with a deeper layer of intentionality about what I’m doing.
I do the Venn diagram in my physical notebook journal. I carry my journal everywhere.
Conclusion
That’s all. These frameworks really help me. Feel free to duplicate them for your own use and make the best of the art of reflection.
Nullius in verba is our motto here. In Latin, it roughly means “take no one’s word for it”.
When people hear that phrase, a common counterargument pops up: you sometimes have to take someone’s word for it. You can’t personally recreate the whole of human knowledge now, can you?
There are a few misconceptions in that argument, because you actually DON’T have to “personally recreate the whole of human knowledge” to avoid taking someone’s word for it.
Misconception #1:
You have to learn the whole of human knowledge to understand everything that can be understood.
You don’t! You don’t have to memorize every single fact to fundamentally understand everything that can be understood. Facts can be looked up, and predictions derived. If you understand the deep underlying theories behind everything, then you know, at a high level, how everything works. And all of this can be understood by a single person.
As new theories succeed old ones, our knowledge becomes deeper as our fundamental theories become more general. Our deepest explanations and theories are becoming so integrated with one another that they can be understood only jointly.
We can have explanations that can reach the entire universe.
Note: this doesn’t mean one doesn’t need other people. In his famous essay “I, Pencil”, Leonard E. Read argues that no one person—repeat, no one, no matter how smart—could create from scratch a humble, everyday pencil. People’s minds are a bank of knowledge. We need access to them to solve more problems and for the act of creation to occur, indeed even to create the most mundane of items, such as a pencil. People are better working together, and best when they do so without any constraints on their thinking.
Misconception #2:
Seeking and understanding good explanations = taking someone’s word for it
This is a big one. If a person is trying to understand something by asking some “expert” a question, this (confused) way of thinking concludes that the person is taking the expert’s word for it.
No!
That person is seeking an explanation, not blindly taking the expert authority’s word for it. If the explanation is not satisfactory (i.e. not hard to vary), then that person will not take the expert’s statement into account. At least the rational person won’t.
For example, if the “expert” says, “To sleep well at night, switch on a strong white light that flashes right onto your closed eyes”… the person seeking good explanations will ask, “How will that affect my sleep in a positive way?”
That person wouldn’t blindly take the expert’s word for it.
Similarly, suppose a person who seeks good explanations asks a sleep expert, “How can I get better sleep?” and the expert answers: “Make sure not to expose yourself to any sort of blue light flowing out of your digital devices for 2 hours before bed. At night, switch on some calming red light while you perhaps do some physical book reading. Blue light suppresses the secretion of melatonin, a hormone that makes you sleepy and is produced mostly in darkness. Artificial blue light makes your brain think it’s morning, not shut-eye time, so it stops producing much melatonin. All kinds of light suppress melatonin production, but especially blue light. Red or dim yellow light does so the least and can actually be soothing while getting ready for bed.”
This answer would satisfy the seeker of good explanations. Perhaps the person might also ask for some research studies on the point. And none of this would be taking anyone’s word for it. It would be understanding a good explanation of why you shouldn’t expose your eyes to blue light before bed. [1]
Seeking and understanding good explanations ≠ taking someone’s word for it
Misconception #3:
Following someone’s instincts and intuitions = taking someone’s word for it
Now the critic might think: OK, fine, but in urgent moments when you have to listen to someone, that’s taking their word for it. Say you’re on a plane with Maverick from Top Gun and something goes wrong—and let’s assume you are a complete doofus when it comes to planes—and Maverick shouts at you to pull up a lever. You don’t question him; you pull that lever!
Top Gun: Maverick
You’re supposedly taking his word for it by trusting his instincts.
But are you really? The choice to pull up the lever is itself independent thinking. The person you’re with knows what he’s doing. Or rather, you know, or suppose, that he knows what he’s doing in the literal split second you have to think. You know that you’re clueless when it comes to planes, and that if you don’t follow Maverick’s orders, you both might die.
Perhaps when Maverick tells you to pull the lever, and as if it weren’t obvious enough, you ask him “Why?!” and he screams back, “We’ll both die if you don’t!” That should be enough, given you already know that Maverick knows his stuff when it comes to planes and that you’re both in quite a lot of danger, so following his orders is the rational thing to do.
So when is someone actually taking someone’s word for it?
There are many instances of people taking someone’s word for it:
When they don’t seek good explanations and are satisfied with arguments from authority
When they mistake “it is written” for a justified explanation
When they mistake supposed intuition for explanation
When “trust” supersedes the need for a good explanation
When anything is out of bounds and accepted (perhaps obligatorily) as unquestionable
It’s the (completely different) ways in which people don’t take someone’s word for it, by…
seeking good explanations,
understanding deep underlying theories,
extending trust only where a good explanation backs it,
…that remove the need for taking someone’s word for it.
And so stands tall our motto:
Nullius in verba
Endnotes
[1] – This wasn’t an example of a direct explanation, but rather advice built from a good explanation of human sleep. That was due to the nature of the question: “How can I get better sleep?”