What criteria can be used to distinguish between correlation and cause & effect?


First, it is important to know what correlation and cause & effect mean in order to distinguish between them.

Correlation definition:

Correlation is a mutual relationship or connection between two or more things. Generally, it is the degree to which one phenomenon or random variable is associated with, or can be predicted from, another.

In statistics, correlation usually refers to the degree to which a linear relationship exists between random variables. Correlation may be positive or negative (inverse):

Positive: both variables increase or decrease together.

Negative: one variable increases when the other decreases.
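These two cases can be illustrated with a short, self-contained Python sketch that computes the Pearson correlation coefficient by hand (the study-time, test-score and TV-hours numbers below are made up purely for illustration):

```python
# Positive vs negative correlation, measured with a hand-rolled
# Pearson coefficient (hypothetical data for illustration only).
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation: covariance divided by the product of spreads."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

hours_studied = [1, 2, 3, 4, 5]
test_score    = [52, 60, 71, 80, 88]   # rises with study time -> positive
hours_tv      = [9, 7, 6, 4, 2]        # falls as study time rises -> negative

print(pearson(hours_studied, test_score))   # close to +1
print(pearson(hours_studied, hours_tv))     # close to -1
```

A coefficient near +1 means the variables rise and fall together; near -1 means one rises as the other falls; near 0 means no linear relationship.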


 Cause & effect definition:

Cause and effect is the principle of causation: one event makes another occur, or is the underlying reason why it happened.
Several factors may be associated with a potential disease causation or outcome, including:
predisposing factors
enabling factors
precipitating factors
reinforcing factors
risk factors


Correlation and cause & effect very often get mixed up, either through misunderstanding or in an attempt to provide a plausible explanation for a scientific observation. It is therefore very important to understand the difference between the two concepts.

Causation involves correlation: if there is a cause-and-effect relationship, then the events are correlated. However, correlated events may share an underlying common cause, meaning they do not necessarily cause each other; another factor not explicitly mentioned does. Just because two events occur together does not imply that one causes the other, or that without one event occurring the other would not happen.
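A common cause can be simulated in a few lines of Python (a textbook-style toy example, not real data): a hidden variable, temperature, drives both ice-cream sales and swimming accidents, so the two end up strongly correlated even though neither causes the other.

```python
# Toy simulation of a hidden common cause (confounder). Temperature
# drives both variables; the variables never influence each other.
import random
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(1)
temps = [random.uniform(10, 35) for _ in range(500)]
ice_cream_sales = [2.0 * t + random.gauss(0, 4) for t in temps]  # caused by heat
drownings       = [0.5 * t + random.gauss(0, 2) for t in temps]  # also caused by heat

# Strong correlation, zero direct causation between the two:
print(round(pearson(ice_cream_sales, drownings), 2))
```

Banning ice cream would not prevent a single drowning; only the shared driver, temperature, links them.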

hidden factor

correlation vs causation comic

The stronger and more consistent the correlation, the more likely it is to reflect causation.

E.g. the link between smoking and cancer: the correlation between the incidence of cancer and smoking is strong enough that most people today consider this to be a cause-and-effect relationship.

Smoking causes cancer, but cancer does NOT lead to smoking.


Relation of sickle cell anemia and malaria

There is a correlation between high frequencies of the sickle-cell allele in human populations and high rates of infection with Falciparum malaria. Where a correlation exists, it may or may not be due to a causal link.


A causal link is where the independent variable has a direct impact on the dependent variable.

Sickle cell anemia arises from a mutation in the hemoglobin gene in which the sixth codon changes from GAG to GTG. When this allele is transcribed, the mRNA has GUG as its sixth codon, which is translated into valine instead of glutamic acid. This causes hemoglobin molecules to stick together in tissues with low oxygen concentrations, distorting the red blood cells into a sickle shape.
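The single-base change can be sketched in a few lines of Python (the tiny codon table below covers only the two codons involved, purely for illustration, not the full genetic code):

```python
# Sickle-cell point mutation: the sixth codon of the coding strand
# changes GAG -> GTG; transcription swaps T for U, and the resulting
# mRNA codon is translated into a different amino acid.
CODON_TO_AMINO_ACID = {  # minimal table: only the two codons discussed
    "GAG": "glutamic acid",
    "GUG": "valine",
}

def transcribe(coding_strand_codon):
    """mRNA carries the coding-strand sequence with U in place of T."""
    return coding_strand_codon.replace("T", "U")

normal_codon, mutant_codon = "GAG", "GTG"
print(CODON_TO_AMINO_ACID[transcribe(normal_codon)])  # glutamic acid
print(CODON_TO_AMINO_ACID[transcribe(mutant_codon)])  # valine
```

One swapped base is enough to change the amino acid, and with it the behaviour of the whole hemoglobin molecule.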

sickle and red blood cells

The frequency of the sickle-cell allele is correlated with the prevalence of malaria in many parts of the world, and here there IS a causal link. There has clearly been natural selection in favour of the sickle-cell allele in malaria-ridden areas, despite the severe anemia it can cause. Natural selection has led to particular frequencies of the sickle-cell and normal hemoglobin alleles that balance the risks of anemia and malaria, and this is cause & effect.

In conclusion, the criterion used to distinguish between correlation and cause & effect is whether the correlated events are directly related for a particular, logical reason, without any underlying common cause. If not, then the events only show correlation, not causality.

All examples of correlation vs cause and effect are shown in the videos below to help you understand the two concepts visually and aurally.

TEDx Talks – The danger of mixing up causality and correlation:

Ionica Smeets

Khan Academy – Correlation and Causality

Correlation Does NOT Imply Causation



To This Day Project – Shane Koyczan

“My experiences with violence in schools still echo throughout my life but standing to face the problem has helped me in immeasurable ways. Schools and families are in desperate need of proper tools to confront this problem. This piece is a starting point.”


You may be asking yourself: what is bullying? According to the Oxford English Dictionary, it means “to use superior strength or influence to intimidate (someone), typically to force them to do something”.

Personally, I consider bullying – as stated on many websites – to be unwanted, aggressive behaviour that involves a real or perceived power imbalance. The behaviour is repeated, or has the potential to be repeated, over time. The consequences for both the bully and the bullied are serious, lasting problems, either psychological or physical.

Bullying includes actions such as:

making threats

spreading rumours

physically or verbally attacking someone

excluding someone from a group on purpose

There are FOUR types of bullying:

1) Verbal bullying: is saying or writing mean things. It includes:

  • Teasing
  • Name-calling
  • Inappropriate sexual comments
  • Taunting
  • Threatening to cause harm

words hurt

2) Social bullying or relational bullying: hurting someone’s reputation or relationships. It includes:

  • Leaving someone out on purpose
  • Telling other children not to be friends with someone
  • Spreading rumours about someone
  • Embarrassing someone in public


3) Physical bullying: hurting a person’s body or possessions. It includes:

  • Hitting/kicking/pinching
  • Spitting
  • Tripping/pushing
  • Taking or breaking someone’s things
  • Making mean or rude hand gestures


4) Cyberbullying: bullying that takes place using electronic technology. It includes:

  • Mean text messages or emails
  • Rumours sent by email or posted on social networking sites
  • Embarrassing pictures, videos, websites, or fake profiles

Kids who are being cyberbullied are, more often than not, bullied in person as well. They also have a harder time getting away from the behaviour, for the following reasons:

  • Cyberbullying can happen at any time, and reach a kid even when he or she is alone
  • Cyberbullying messages and images can be posted anonymously and distributed quickly to a very wide audience, making it difficult and sometimes impossible to trace the source.
  • Deleting inappropriate or harassing messages, texts, and pictures is extremely complicated after they have been posted or sent.


Watching the To This Day Project video inspired me to think more deeply about how bullying can scar someone forever, persisting into their adult lives.

“to this day
despite a loving husband
she doesn’t think she’s beautiful
because of a birthmark
that takes up a little less than half of her face
kids used to say she looks like a wrong answer
that someone tried to erase
but couldn’t quite get the job done
and they’ll never understand
that she’s raising two kids
whose definition of beauty
begins with the word mom
because they see her heart
before they see her skin
that she’s only ever always been amazing”

Picturing a lady who is loved infinitely by her husband and two children because they care not about her appearance but about “her heart”, which is what really matters and makes a person beautiful, and who still doesn’t consider herself beautiful, really shows the impact bullying can have on someone’s perspective of life.

When people bully, I think they don’t understand how they are affecting someone’s life and the consequences their actions have. I’m not sure if it is because I’m older and have therefore become more aware of what is occurring in the world, or if it is this generation of kids, but many teenagers have committed suicide to escape the torment they experienced. What kind of world do we live in? One where being dead is better than reality is unacceptable to me. Why are kids doing this to each other? It is NOT funny, and it should definitely NOT be tolerated.

The following video in particular really hit close to home, because having a brother with Down Syndrome has made me quite sensitive to the way people treat others, and to how people who are “different” due to race, disability, religious belief, etc. are treated with disrespect as a consequence. Just picturing someone bullying my brother for the way he was born, disregarding what a kind, compassionate, and overall amazing human being he is, truly upsets me. Knowing that someone like him is being bullied is so unfair and illogical to me that it is hard to believe people can be so cruel. That is why I don’t want to see anyone experience bullying. LET’S STOP BULLYING.

“Let’s stop bullying for all”

“Let’s stop bullying forever”

“Let’s stop bullying together”

Here are some links to help those suffering from bullying, as I find it important for not only children, but anyone, to be able to find support when going through such a tough time. Also, assertive upstanders can help spread the word of anti-bullying by joining the campaigns in the links provided. One of the most important things to know is that YOU ARE NOT ALONE.


Do you consider faith to be a Way of Knowing? Would it be more accurate, or clearer, to call it a Way of Believing? Compare faith in this regard with emotion, memory and sense perception.


Faith is a new Way of Knowing in Theory of Knowledge. It does not rely on proof and often clashes with evidence-based knowledge. Therefore, to some it shouldn’t be considered a Way of Knowing (WOK); for other people, faith is the basis of their whole lives.


Dictionary definitions give two senses of faith:

1. Complete trust or confidence in someone or something: “this restores one’s faith in politicians.”

2. Strong belief in the doctrines of a religion, based on spiritual conviction rather than proof: “bereaved people who have shown supreme faith”; a particular religion: “the Christian faith”; a strongly held belief: “men with strong political faiths.”

Faith doesn’t necessarily have to involve religion – as indicated by the first definition, it can just indicate a close affiliation or trust in someone, an organisation, or a movement.

The term “faith” is most frequently used to refer to religious faith, but it can also be used in a nonspiritual sense as a synonym for trust. For many people, faith is a key way in which they try to understand and explain the world. People argue that faith is an act of trust and an example of knowledge that is not evidence-based. In some traditions, belief that is not based on evidence is seen as superior to belief that is based on evidence; the demand for concrete evidence therefore signifies a lack of faith.


Some people consider faith a Way of Knowing, while others deem it more accurate to call it a Way of Believing. Personally, I consider faith to be more accurately depicted as a Way of Knowing, since calling it anything else would complicate the idea that faith can aid in the gaining of knowledge and can even be considered knowledge in a religious context. It would be insulting to call faith a Way of Believing, since faith is not the same as belief, and doing so would reduce its value. Belief is subjective, often unreliable, and something that can change in a matter of seconds. Faith, on the other hand, is constant and mostly permanent, a more profound version of belief. Faith might mean everything to an individual who seeks knowledge and truth through this medium. An outsider might consider what they discover through faith invalid due to the lack of substantial support, yet to the believer it is knowledge, achieved through faith. From this point of view, faith is quite similar to knowledge and what it stands for. Therefore, considering faith a Way of Knowing is much clearer than considering it a Way of Believing, since faith is basically knowledge.

Here is an inspirational video on faith:



Emotion can be connected to faith, since faith can often guide how a person feels. Having faith that something will have a positive outcome means that the emotions towards it will be optimistic. If an issue that causes stress goes against a person’s faith, the reaction will be negative emotions, such as outrage.

 “My faith helps me overcome such negative emotions and find my equilibrium.”

Dalai Lama



Memories can help us foresee events based on previous incidents. This involves faith, since nothing can be known for certain to happen until it does. Faith in memory is the trust that we can build a better future from what we know and from past experiences.

 “Today, I know that memories are the key not to the past but to the future. I know that the experiences of our lives, when we let God use them, become the mysterious and perfect preparation for the work he will give us to do.”

      Corrie ten Boom


Sense perception, based on faculties such as sight, smell, hearing, taste, and touch, by which our bodies perceive external stimuli, allows us to gain knowledge. The knowledge gained from our senses is based on the solid facts of what we perceive, from which we arrive at conclusions. Nevertheless, our senses can be tricked, for example through optical illusions. This means that to fully trust our senses in forming conclusions about what occurs around us, we must have faith in them up to a certain extent. We are constantly double-checking that our senses are picking up evidence correctly. We can also argue that sense perception and faith oppose each other, since faith is believing in something we do not sense, while sense perception is processing a stimulus, providing knowledge with evidence.

 “Faith indeed tells what the senses do not tell, but not the contrary of what they see. It is above them and not contrary to them.”

Blaise Pascal

DNA double helix

Who were Watson and Crick? What is the relative role of competition and cooperation in scientific research?


James Watson ( 1928 – )


James Dewey Watson was born in Chicago on April 6th, 1928. In 1947, he received a Bachelor of Science degree in Zoology, and during this time he developed a serious desire to learn genetics. This became possible when, in 1950, he received a Fellowship for graduate study in Zoology at Indiana University, where he received his Ph.D. degree in Zoology. Watson was greatly influenced by the geneticists H. J. Muller and T. M. Sonneborn. From September 1950 to September 1951 he spent his first post-doctoral year in Copenhagen, part of it with the biochemist Herman Kalckar and the rest with the microbiologist Ole Maaløe. Once more he worked with bacterial viruses, attempting to study the fate of the DNA of infecting virus particles. During the spring of 1951, he went with Herman Kalckar to the Zoological Station at Naples, where at a symposium he met Maurice Wilkins and saw for the first time the X-ray diffraction pattern of crystalline DNA. This encouraged him to change the direction of his research toward the structural chemistry of nucleic acids and proteins. Fortunately, this became possible when Salvador Luria, the Italian-born microbiologist, arranged in early August 1951 with John Kendrew, an English biochemist and crystallographer, for Watson to work at the Cavendish Laboratory from October 1951.

Francis Crick ( 1916 – 2004 )


Francis Harry Compton Crick was born in Northampton, England, on June 8th, 1916. He studied physics at University College, London, obtaining a Bachelor of Science in 1937. He soon started research for a Ph.D. under Edward Neville da Costa Andrade, an English physicist, poet and writer, but this was interrupted by the outbreak of World War II in 1939. During the war he worked as a scientist for the British Admiralty on magnetic and acoustic mines. In 1947, he left the Admiralty to study biology. At that point Crick knew no biology and practically no organic chemistry or crystallography, so he spent the next few years learning the elements of these subjects. During this period, together with W. Cochran and V. Vand, he worked out the general theory of X-ray diffraction by a helix and, at the same time as L. Pauling and R. B. Corey, suggested that the alpha-keratin pattern was due to alpha-helices coiled round each other. Crick went to Cambridge and worked at the Strangeways Research Laboratory. In 1949, he joined the Medical Research Council Unit led by M. F. Perutz, an Austrian-born British molecular biologist. Crick became a research student for the second time in 1950, when he was accepted into Caius College, Cambridge, and finally obtained his Ph.D. During 1953–1954, Crick was on leave of absence at the Protein Structure Project of the Brooklyn Polytechnic in Brooklyn, New York. He has also lectured at Harvard as a Visiting Professor on two occasions, and has visited other laboratories in the USA for short periods.

James Watson and Francis Crick soon met and discovered their common interest in solving the DNA structure. Their friendship began in 1951, and soon became a critical influence to each other’s careers. Watson and Crick worked together on studying the structure of DNA (deoxyribonucleic acid), the molecule that contains the hereditary information for cells. Working at Cambridge University, their approach was to make physical models to narrow down the possibilities and eventually create an accurate picture of the molecule. Meanwhile at King’s College in London, Maurice Wilkins and Rosalind Franklin were also studying DNA taking an experimental approach, using X-ray diffraction to study DNA.

Later in 1951, Watson attended a lecture by Franklin on her work. She had found that DNA can exist in two forms, depending on the humidity of the surrounding air, which had helped her deduce that the phosphate part of the molecule was on the outside. Watson returned to Cambridge with only a hazy memory of the facts she had presented. Based on this information, Watson and Crick made a failed model, which caused the head of their unit to tell them to stop DNA research. Nevertheless, they persisted with their work.

Franklin, working mostly alone, found that her X-ray diffraction patterns showed that the “wet” form of DNA (at higher humidity) had all the characteristics of a helix. She suspected that all DNA was helical. In January 1953, Wilkins showed Franklin’s results to Watson, apparently without her knowledge or consent. Watson and Crick took a crucial theoretical step, suggesting the molecule was made of two chains of nucleotides, each in a helix as Franklin had found, but one running up and the other running down. Crick had just learned of Chargaff’s rules (in the DNA of any organism, purine and pyrimidine bases occur in a 1:1 ratio: the amount of guanine equals that of cytosine, and the amount of adenine equals that of thymine) in the summer of 1952. He added this to the model, so that matching base pairs interlocked in the middle of the double helix and kept the distance between the chains constant. Watson and Crick showed that each strand of the DNA molecule was a template for the other.
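Chargaff's rules hold for any double-stranded sequence, which a short Python sketch can demonstrate (the sequence below is made up for illustration):

```python
# Chargaff's rules on a toy double-stranded DNA molecule: across the
# two strands, every A pairs with a T and every G with a C, so the
# totals of the paired bases always come out equal.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

coding_strand = "ATGGAGCATTACGGC"  # arbitrary example sequence
template_strand = "".join(COMPLEMENT[b] for b in reversed(coding_strand))
double_strand = coding_strand + template_strand  # both strands' bases together

print("A:", double_strand.count("A"), "T:", double_strand.count("T"))
print("G:", double_strand.count("G"), "C:", double_strand.count("C"))
```

The equal counts fall directly out of complementary base pairing, which is exactly why Chargaff's 1:1 ratios were such a strong clue for the double-helix model.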

This second effort, in March 1953, resulted in the proposal of the molecular DNA structure: a complementary double-helix configuration. The structure so perfectly fit the experimental data that it was basically immediately accepted. Their model served to explain how DNA replicates and how hereditary information is coded on it. This was a base for the rapid advances in molecular biology that continue to this day.

As a result of their discovery, Watson, Crick and Wilkins shared the Nobel Prize in Physiology or Medicine in 1962. Franklin had died in 1958 and, despite her key experimental work, could not receive the prize posthumously. Crick and Watson both received numerous other awards and prizes for their work.

The roles of cooperation and competition are ever-present and essential to scientific research and its development.

Competition in scientific research occurs when scientists work independently to prove a theory, each trying to release correct, final results before the others. It allows scientific research to progress at a faster rate, because everyone wants to be first, accelerating the advancement of science. All scientists want to reach their goal and receive recognition in their field, knowing they were the first to accomplish what was once impossible or undiscovered. Competition influences scientific research by urging individuals to strive for the best, stay focused and produce the finest results. If there were no competition, people would not be as passionate, or have the same curiosity and urge to discover new things.

Nevertheless, competition has its downside, since the rush to reach the “finish line” first can lead scientists to make small, undetected errors that invalidate their work. Labs now also maintain extremely tight security so that no one else can take their results or use their work. Competition can even lead scientists to behave unfairly, stealing a colleague’s information or failing to credit their work. As a consequence, rather than helping the advancement of scientific research, competition can delay it, since information does not flow freely and some scientists achieve success by unfair means.

Cooperation in scientific research is when a group of scientists work together to prove a theory or to investigate something they all have an interest in. It is important because it develops the skill of group work: in a way, it forces scientists to work with one another, sharing their thoughts and the information they have acquired, and thus spreading knowledge. This helps scientific progress, as people acquire more knowledge when working together and there are many different ideas and perspectives to consider. Cooperation is one of the surest ways to achieve a goal quickly and complete the research, as well as to obtain accurate results, since many people check the work from more than one perspective.

Nevertheless, within a group the balance of power is not always equal, and some people will not be listened to as much as others. This is unjust, and the work may take longer, since the person with the answer may not be heard. If there is no trust when cooperating, conflicts are sure to ensue, and if one scientist makes a mistake, the whole group is at fault and everyone suffers the consequences.

For scientific research to work efficiently, both competition (creativity and striving for the best) and cooperation (team-work, and multiple ideas) need to be present in order for the research to have a successful result.

The insight, innovation, and persistence of James Watson, Francis Crick, Rosalind Franklin, and Maurice Wilkins helped unlock one of the great secrets of life: a detailed understanding of the structure of DNA. This discovery brought together information produced by working together and using each other’s ideas in order to arrive at their final result, the DNA structure.

Their work illustrates the following about the nature of science:

  • Science can test hypotheses about things that are too small for us to observe directly
  • Science relies on communication within a diverse scientific community
  • Scientists are expected to give credit where credit is due


How effective is reasoning as a way of knowing? Did emotion or reason help you to identify the logical fallacies of the article ‘We cannot indulge in this madness’? Did reasoning alter your opinion about the article? If so, why?


Reason is defined by the Oxford Dictionary as “the power of the mind to think, understand, and form judgments logically”. In other words, it is how we, as human beings, try to make sense of the world using logic, rationality, comparison, judgment and experience. It is what we use to make decisions, and most of the time our reasoning occurs instinctively, almost unconsciously. I believe it is possible to train ourselves to reason consciously: the more one thinks about the decisions one is making, the more control one has over them.

There are many ways of reasoning, but the most common is logic, which divides neatly into deductive and inductive reasoning. Deduction derives specific conclusions from general premises, while induction works the other way, producing a general conclusion from a specific case or cases. The most widely used form of deductive reasoning is the syllogism, in which a first premise is linked to a second, helping us arrive at a conclusion.

An example I found is the following:

Primary premise: All humans are mortal.

Secondary premise: Socrates is human.

Conclusion: Socrates is mortal.

This argument cannot be refuted: if the two premises are true, it is impossible for the conclusion to be false.

Now, a second example I found:

Primary premise: Many IB Diploma students speak a second language

Secondary premise: Gabriel does the IB Diploma

Conclusion: Gabriel speaks a second language

This is not necessarily true, since the first premise makes a generalisation (“many”, not “all”), so the conclusion does not follow with the certainty it would if the premises were specific.
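The difference between the two syllogisms can be sketched with simple set membership in Python (the names and sets are invented for illustration):

```python
# Valid deduction: an "all" premise makes the category a guaranteed
# subset, so membership in it forces membership in the conclusion.
humans = {"Socrates", "Hypatia"}
mortals = humans | {"Rex the dog"}   # premise: ALL humans are mortal

print("Socrates" in mortals)         # True: the conclusion must hold

# Invalid deduction: a "many" premise gives only a partial overlap,
# so a conclusion about one particular member is not guaranteed.
ib_students = {"Gabriel", "Ana", "Luca"}
bilingual = {"Ana", "Luca"}          # MANY (not all) IB students speak a second language

print("Gabriel" in bilingual)        # False: Gabriel may or may not qualify
```

The first conclusion is forced by the premises; the second is merely made likely, which is exactly the gap between deduction and induction.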

On a day-to-day basis we use inductive reasoning more often than deductive reasoning, because we make generalisations from experiences that have occurred to us in the past, often based just on what we have seen or felt. Due to all of the above, reasoning is an effective way of knowing only if what we are analysing or concluding is based on something real and specific (logical); otherwise it is left to our imagination and to the premises we generalise through induction.

In our last TOK class we had to analyse the following article, find the logical fallacies in it, and explain which logical fallacy each statement committed.


A logical fallacy is a failure in reasoning that makes an argument invalid. If you know what fallacies are, you can both avoid making them when presenting an argument and identify them when others make them. In advertising, politics, the media and law, fallacies are overused; undetected by an unaware audience, they trick and mislead. Here is a chart of many logical fallacies one can learn to identify:


Some people see emotion as one of the most important factors in decision-making. Desperation can cause us to make rash decisions that impact our lives later on. Emotions can also give us a certain inclination (a biased view) when facing a question. This can be seen when someone is asked whether they would rather have their family killed or be murdered themselves: in most cases, due to emotional attachment, love and the guilt they would feel, people answer that they would rather die than see their loved ones killed. Therefore, when we strongly oppose something out of emotion rather than reason, our decisions differ from those we would have made from a rational point of view.

In this video, we can see the relationship between reason and emotion. It explains how emotions tend to take over in desperate situations (if we are attacked by a bear, we feel fear and run) or in certain others (when someone persuades us by making us empathise), but that reason, as we grow older, is increasingly in control:

In my case, since I strongly believe in gay rights, my whole perception of the article “We cannot indulge in this madness” was that it was illogical; it went against all reason, in my opinion, and against my beliefs. It was therefore hard for me to precisely identify a logical fallacy, as it made me very upset that a person could even think this way and deny two people love, and I couldn’t see anything in it as logical. Nevertheless, I can also see that the article was written by a man who strongly believes what he is saying, so from his perspective it may not seem fallacious at all.

A fallacy that emotion helped me identify in regards to gay marriage was the following:

“It would create a society which deliberately chooses to deprive a child of either a mother or a father.”

This is an example of how emotion helped me find a logical fallacy, because this argument angers me. The author makes it seem as if allowing same-sex marriage will impact children negatively. It undermines the capability of homosexual men or women to be good parents, while so many children live in terrible conditions with unloving and abusive heterosexual parents. In my opinion, there should be no sexual-orientation requirement for becoming a parent; a person’s suitability should be based on how much they can provide for their children (love, housing, opportunities, etc.). No one should be denied this right, with children used as an excuse.

On the other hand, a logical fallacy that reason helped me identify was this one:

“Disingenuously, the Government has suggested that same-sex marriage wouldn’t be compulsory and churches could choose to opt out. This is staggeringly arrogant.”

Reason helped me find this fallacy because the statement simply doesn’t make sense as an argument against same-sex marriage. I could not comprehend how giving churches the choice to “opt out” of same-sex marriage is “arrogant” towards people who oppose this type of marriage, since those churches can choose freely, and the Government is only making a suggestion.

In conclusion, reasoning definitely did alter my opinion about the article, whether it was emotional or logical, because it shaped how I perceived what was being said. Through reason and emotion I was able to disagree completely with the article and form a stronger opinion on the matter of same-sex marriage. It also helped me see how deeply people can disagree with a person’s sexuality and try to define what they can or cannot do.


Do different languages lead to different perceptions of reality (Sapir-Whorf Hypothesis)? Or do we think the same way in any language (Chomsky)? What do you think?


In our last TOK seminar, we discussed one of the Ways of Knowing: language. We were then asked a question based on the Sapir-Whorf hypothesis and Chomsky’s linguistic theory. The Sapir-Whorf hypothesis, created by the linguists Edward Sapir and Benjamin Lee Whorf, suggests that our understanding of the world depends to a large extent on the language we use to interact with it; in other words, different languages lead to different perceptions of reality, leading cultures to behave distinctly according to the words and phrases they use to label the world. Noam Chomsky, on the other hand, proposes that whatever tongue we speak, we still perceive the world in the same way: innate abilities in language learning mean that language is universal.

 Edward Sapir (1884-1939)


An American anthropologist-linguist, widely considered one of the most important figures in the early development of the discipline of linguistics.

“The psychology of a language which, in one way or another, is imposed upon one because of factors beyond one’s control, is very different from the psychology of a language which one accepts of one’s free will.”

Benjamin Lee Whorf (1897-1941)


An American linguist known as an advocate for the idea that because of linguistic differences in grammar and usage, speakers of different languages conceptualize and experience the world differently.

“Language is not simply a reporting device for experience but a defining framework for it.”

Sapir-Whorf hypothesis

I highly encourage you to watch this short video titled “Does language shape how we think? Linguistic relativity & linguistic determinism,” as it touches on all points of this hypothesis and helps one understand it visually.

Noam Chomsky (1928 – )

Chomsky is a linguist, philosopher, and political activist. His theories on the extent to which language is innate to humans, and his ‘universal grammar’ theory, are unmatched in their influence.

 “Human language appears to be a unique phenomenon, without significant analogue in the animal world”

In this video, Chomsky discusses the major debates in linguistics:

I further recommend reading this interview—-.htm to delve into the “Psychology of Language and Thought” Chomsky analyses and become more informed.

When first asked whether different languages lead to different perceptions of reality or whether we think the same way in any language, my instinctive answer seemed quite obvious to me: I believed that we think the same no matter what language we are thinking or speaking in. Our perception and views of the world would remain constant despite there being different languages. This means that I concurred with Chomsky’s views…originally.

Then, upon reflecting on my personal experience, I came to realise that this is not necessarily the case. As a trilingual person, I identify strongly with these theories and hypotheses. Basic ideas, principles and morals do not (and shouldn’t) change; we are still the same person in essence, no matter what language we are identified with. What I feel differs is how we express and react to what we think, owing to the link with the culture and historical background of the individual.

In my opinion, culture can define how and what we think. An indigenous person from the Amazon who has been surrounded by nature and tribal culture her/his whole life and speaks only their native dialect will not think the same way as a European living in Western culture who speaks multiple languages. Each culture shapes how its people view life, and that view is expressed through their language.

For example, having different words for different colours leads us to form different perceptions of the world. This alters the way different people view the same object, depending on their native language. Also, many words and phrases are native to one language and can’t be translated to many others, so the expression of thoughts or ideas changes depending on the language being used. This theory is most striking when one compares the languages of cultures that are very far removed, rather than just comparing the subtle differences between Spanish and Portuguese.

One case that has been studied deeply is the Pirahã people of the Brazilian Amazon. They use three different words for numbers, which translate as “approximately one”, “a little more than one”, and “a lot more than one”. Since their whole perception of groups of objects is based on this, they have serious difficulties counting and distinguishing between patterns of objects once their numbers rise beyond about eight.

This is different from languages such as English, Spanish, French, etc., where counting and numbers are very precise, making speakers’ perception of quantity far more exact. An example such as this one shows how different languages produce different perceptions of reality.

Nevertheless, this example shows how a person who speaks only one language has a perception of reality that differs from that of another group of people. I believe that if the Pirahã people knew English, they would have a more accurate perception of numbers; this would not necessarily alter their own perception of reality, but would make it more precise.

Currently, many linguists accept that there is some difference of perception depending on the language that we use. Certainly there are many untranslatable terms that are key concepts in certain languages and cultures:

Culinary terms are in French

Musical directions are in Italian

Both the Sapir-Whorf hypothesis and Chomsky’s studies can be considered correct. This is because people can only speak from their personal experience, which depends on whether they know one or more languages. One person might struggle to identify with a language and find sense in it, while another can switch back and forth without altering their perception of reality, because they are in touch with many cultures and do not feel closely affiliated with any particular one.


In your everyday life, do you yourself use technology to improve how you see, hear, or use other senses?


Beau Lotto, a neuroscientist and founder of ‘Lotto Lab’, investigates how we perceive the world with our senses and brains. His optical illusions force viewers to question what they have always taken for granted.

Beau Lotto: Optical illusions show how we see


Sense perception refers to “any of the faculties of sight, smell, hearing, taste, and touch, by which the body perceives an external stimulus.” In other words, we use these senses to recognise and respond to external stimuli. We use all of our senses constantly to smell, taste, touch, see or hear things in our environment. Nevertheless, there are some people who lack certain senses and are deprived of that specific type of sensory perception: for example, people who are blind, deaf or mute. In some cases people are born this way, but one can also develop degenerative loss of sense perception. This can be witnessed with hearing and sight loss as one ages, or with diseases such as ALS that affect touch and muscle movement. Despite these latter examples, there are ways to restore those senses, and even to enhance them, using technology.

I believe we don’t realise that we also use technology on a regular basis to improve our senses, even if we don’t have any sensory impairments. For many people with a defective sense, technology might be a necessity, such as:

A pair of glasses or contact lenses for a person in need of visual aid (the lenses become “part of the body” as it’s forgotten by the user)

A hearing aid for people suffering from hearing loss

A cane for a blind man to use as an extension of his arm, not as a tool but as a part of his body

For others, it can simply be a common object that refines one of a person’s senses. Essentially, these are tools that assist our defective senses or enhance our regular ones.

Some technological items that help improve my senses that are crucial for me are:


Telephone/Computer: Enables me to keep in contact with friends, whether they live in the same place as I do or elsewhere. These devices improve visual, auditory and tactile senses in a way, because they provide experiences that without telephones and computers wouldn’t be lived and felt.

Audio: I use earphones almost every day to listen to music, watch videos, etc. By amplifying the sound being produced, the earphones allow my ears to detect noises that I wouldn’t otherwise be able to hear.

Photography: Cameras visually improve our sense of sight, as effects can be used. By using cameras I can zoom in on points my naked eye could not see without this tool. Also, effects applied to images enhance the colour, picking up on detail and making them much clearer. Therefore, photography improves my sight in certain respects.

Artificial flavourings: When baking, artificial flavourings are normally added. This is not a type of electronic technology, but it is a culinary one. Artificial flavouring brings out the taste in certain foods, improving my sense of taste. It is not technology that alters my taste buds, but it enhances taste that naturally wouldn’t be present.

By using these technological tools, both sense perception and the knowledge gained from our environment are enhanced. Every time we receive a new stimulus, we are learning. Every moment of our lives, we rely on our senses in order to receive information, and therefore to gain knowledge.