All posts by ma98bm


The development of some techniques benefits particular human populations more than others. For example, lactose-free milk, developed and sold in Europe and North America, would have greater benefit in Africa and Asia, where lactose intolerance is more prevalent. The development of such techniques requires financial investment.

Should knowledge be shared when techniques developed in one part of the world are more applicable in another?

I believe that all new knowledge should be shared, especially when it can help improve a person’s life and when techniques developed in one part of the world are more applicable in another. To me, this is a simple question of morals. When you have the potential to save millions of people, there should be no reason not to let people know that they can be cured, or that they don’t have to go through the pain they do, and to give them the means necessary to stop their suffering. This is especially true when people know that others are being aided for the same medical condition, but nothing can be done for them in their own country. That is a sad reality that no one should have to face.

Lactose is a type of sugar found in milk and other dairy products. An enzyme called lactase is needed by the body to digest lactose. Lactose intolerance develops when the small intestine does not make enough of this enzyme.

Children in Africa and Asia suffer serious abdominal bloating, abdominal cramps, diarrhoea, gas and nausea due to their lactose intolerance, yet they are not provided with any lactose-free dairy products (specifically milk for babies, which is their main source of nutrition). Providing these countries with the knowledge of how to prevent the pains and issues experienced by people with this intolerance would be extremely beneficial. There are also many other, more severe problems and diseases people are experiencing in other countries, and by sharing knowledge, these men, women and children who are in desperate need could be cured.


Knowledge, if shared, could save many lives. The problem is that the techniques implemented require financial investment. Even though it takes financial support to gain this knowledge or put it into practice, this sharing of knowledge should be considered a “donation”, especially since the problem (in this case lactose intolerance) is much more severe, and could cause more harm, for those who don’t have the capacity to financially support medical or scientific research. There are many NGOs and foundations, such as the FAO, that are eager and dedicated to defeating hunger by leading international efforts.

There should be no excuse for not helping out other human beings or for restricting the spread of knowledge.

Here is a short video that helps visually explain what lactose intolerance is and its impact on a person.

This is a PDF document by the FAO discussing the importance of milk and dairy products in human nutrition.


1) Was it ethically correct to ‘fake’ an experiment, and mislead volunteers as to the nature of what was being investigated? Or given the nature of human beings studying human beings, is this the only way to properly carry out such research?

2) Was it ethically correct to put the volunteers under so much stress? (Many of them were visibly disturbed during the experiment, though a poll conducted later found that 84% of them professed that they were ‘glad’ to have taken part.)

3) Can the subject matter be ethically justified – ie, the capacity of human beings to participate in something immoral – or should some things remain untouched by human scientists?

4) What are the ethical implications of the results, and how should we act on them?


According to the Oxford English Dictionary, ethics means:

1 the moral principles governing or influencing conduct.

2 the branch of knowledge concerned with moral principles.

Essentially, ethics means the same as ‘moral philosophy’: the study of how to live our lives morally.


Stanley Milgram investigated how far people would go in obeying an instruction if it involved harming another person. He was interested in how easily ordinary people could be influenced into committing atrocities.

e.g. Germans in WWII.


“Volunteers were recruited for a lab experiment investigating ‘learning’. Participants were 40 males, aged between 20 and 50, whose jobs ranged from unskilled to professional, from the New Haven area. They were paid $4.50 for just turning up.

At the beginning of the experiment they were introduced to another participant, who was actually a confederate of the experimenter. They drew straws to determine their roles – learner or teacher – although this was fixed and the confederate was always the learner. There was also an ‘experimenter’ dressed in a grey lab coat, played by an actor.

Two rooms in the Yale Interaction Laboratory were used – one for the learner (with an electric chair) and another for the teacher and experimenter with an electric shock generator.

The ‘learner’ was strapped to a chair with electrodes. After he had learned a list of word pairs, the “teacher” tested him by naming a word and asking the learner to recall its partner/pair from a list of four possible choices.

The teacher was told to administer an electric shock every time the learner made a mistake, increasing the level of shock each time. There were 30 switches on the shock generator marked from 15 volts (slight shock) to 450 volts (danger – severe shock).

The learner gave mainly wrong answers (on purpose) and for each of these the teacher gave him an electric shock. When the teacher refused to administer a shock the experimenter was to give a series of orders / prods to ensure they continued. There were 4 prods and if one was not obeyed then the experimenter (Mr. Williams) read out the next prod, and so on.

Prod 1: please continue.

Prod 2: the experiment requires you to continue.

Prod 3: It is absolutely essential that you continue.

Prod 4: you have no other choice but to continue.”

Milgram was cleared of any ethical violations, but the controversy still rages today.



A major problem was the lack of “informed consent”. Informed consent roughly means that the subject is given an accurate description of the risks involved before he or she consents to participate in the experiment. Milgram’s description of the experiment was deceptive: the subjects believed they were participating in an experiment on learning and memory.

The primary necessity for deception is to ensure that the subjects will act naturally in the experiment. If they knew this was a study on obedience, then it might alter their behavior. For example, since we generally do not like to think of ourselves as blindly obedient, we might go out of our way to show the experimenter how independent we “really are”. To complicate things, learning that one would be asked to administer a great deal of pain during the experiment would likely cause a large number of subjects to decline to take part. The results of the experiment would then only be generalizable to those who agreed to participate. As a result, it would be easy for us to say that the obedience in the study was simply due to the fact that these were people who liked to inflict pain. And that would miss completely the chilling results of the study.

I do not believe it was ethically incorrect to carry out a fake experiment, because there was never a risk of anyone getting hurt. The only way to properly study the behaviour of the participants was to create a fake scenario for them and to keep them in the dark as to the real functioning and objective of the experiment. If the participants had known that they were being tested on their ability to follow orders, they might have felt a need to rebel or go against what they were being told, either consciously or subconsciously. If they had known that the ‘learner’ would not actually get hurt, then they would have felt no objection whatsoever to following orders, and the results of the experiment would not accurately represent general human nature.


Long Term Psychological Harm:

The realization that they could administer such lethal levels of shock to another human being could have long-term negative psychological effects on the subjects. What might people think about themselves, knowing that they were willing to administer possibly lethal shocks to a helpless learner?

Milgram’s Defense:

In addition to the post-experiment debriefing, Milgram sent each of the “teachers” a written report in which their performance in the experiment was treated in a dignified way.

Subjects also received a survey about their participation in the experiment and gave the following assessment of their participation:

83.7% I am glad/very glad to have been in the experiment
15.1% I am neither sorry nor glad to have been in the experiment
1.3% I am sorry/very sorry to have been in the experiment

Nevertheless, one participant, William Menold, who had just been discharged from a Regimental Combat Team in the U.S. Army and who participated in 1961, said, “It was hell in there”, describing how it felt to be in the laboratory during the experiment. He said, “[I was] hysterically laughing, but it was not funny laughter…It was so bizarre. And I mean, I completely lost it, my reasoning power”. He said that he couldn’t believe “that somebody could get [him] to do that stuff”.

Blass says that another subject, Herbert Winer, said that his experience of the experiment was “very difficult to describe…the way [his] feelings changed [about it], and the conflict and tension that arose”, and that his “own heart condition went into an extremely tense and conflicted state”. The most revealing comment showing the damage Milgram’s experiments did came from Winer when he was debriefed at the end of the experiment. He said he “was angry at having been deceived”, that he “resented the whole situation” and “was a little embarrassed at not having stopped earlier”.

Finally, Milgram reported that one year after the experiment was completed, the 40 subjects whom a psychiatrist felt would be most likely to have suffered consequences were further evaluated. After examination, the psychiatrist concluded that although extreme stress had been experienced by several of the subjects during the experiment, none was found to have been harmed by the experience.

In the end, I don’t believe putting the volunteers under stress carried serious long-term ethical implications. The entire focus of the experiment was to see whether ordinary people would succumb to pressure and follow direct orders, even when it was seemingly at the expense of another person’s life. For those purposes the stress was necessary; yet I personally still believe it was wrong and inappropriate to inflict it.


Milgram discovered that human beings are generally capable of performing terrible acts simply because of an unquestioning willingness to follow orders from a figure of authority, even while under incredible amounts of stress. In my opinion, this can be ethically justified to a certain extent: it is understandable that people follow orders from their superiors, as they are supposed to do, but they should recognise when they have gone too far and be able to determine when they need to stop, namely when they are hurting themselves or others. Nevertheless, it is important to know what humans are capable of, and through this experiment to determine what we need to change, or rather improve, about our nature.


The ethical implications of the results, as previously discussed, are the following: psychological harm and deception.

Based on the results, it is shocking to find how easily humans follow orders, even when those orders could cost someone their life; all the more so given that most participants (84%) were later glad to have taken part. This is clearly ethically incorrect, as intentionally hurting or killing an innocent person is condemned by almost all social and religious groups; since the “teachers” in this experiment didn’t know it was fake, they believed they were knowingly inflicting this pain on an innocent person, conscious of the risk of killing someone.

The results therefore show that almost anyone could commit crimes under intense pressure from authority. This demonstrates a frightening characteristic of human nature, since any criminal could be excused under these pretences. Therefore, we should work to correct this flaw, which history has shown to be detrimental (WWII), and teach everyone from a young age to question everything and become independent.


This video is the first part of three, which gives a detailed recording of what a reenactment of Milgram’s Obedience Experiment was actually like:


In what ways does personal knowledge interact with shared knowledge in religion?

The personal religious knowledge of individuals is drawn from the shared knowledge of their family and surrounding culture: one can predict with some likelihood the religion that will be followed by a child growing up in a Hindu family in India or a child growing up in Christian family in Lesotho.

To what extent would you consider religion to be entirely shared knowledge, with a new generation accepting communal knowledge into their own personal knowledge?

Is the knowledge exchange a one-way communication, possibly involving respect for authority and personal humility?


According to the Oxford English Dictionary, religion means:

1 the belief in and worship of a superhuman controlling power, especially a personal God or gods: ideas about the relationship between science and religion;

2 a particular system of faith and worship: the world’s great religions;

3 a pursuit or interest followed with great devotion: consumerism is the new religion

“I know” refers to the possession of knowledge by an individual—personal knowledge.

“We know” refers to knowledge that belongs to a group—shared knowledge.



Shared knowledge is highly structured and systematic, and is the product of more than one individual, each contributing to expand the knowledge system.

Shared knowledge changes and evolves over time. These changes might be slow and cumulative. On the other hand, they could also be sudden and dramatic, revolutionary shifts in knowledge.



Personal knowledge depends solely on the experiences of a particular individual. It is gained through experience, practice and personal involvement. It is influenced by an individual’s personal perspective yet at the same time contributes to it.

Personal knowledge is made up of:

  • skills and procedural knowledge that I have acquired through practice and habituation

  • what I have come to know through experience in my life beyond academia

  • what I have learned through my formal education (mainly shared knowledge that has withstood the scrutiny of the methods of validation of the various areas of knowledge)

  • the results of my personal academic research (which may have become shared knowledge because I published it or made it available in some other way to others).

Personal knowledge therefore includes what might be described as skills, practical abilities and individual talents. This type of knowledge refers to knowledge of how to do something.

Compared to shared knowledge, personal knowledge is considered more difficult to express to others.

Like shared knowledge, personal knowledge is not static, but changes and evolves over time. Personal knowledge changes in response to the experiences of an individual.

Shared knowledge in a religion is the basis of that religion: its rules, traditions, ancient prophecies, moral code and cultural/historical background. This can be seen in the fact that it is mostly written down in a holy book (the Bible in Christianity, the Torah in Judaism and the Quran in Islam). The documents and passages published are the shared knowledge of a religion, since they were written in the past to serve the purpose of transmitting the message of God to all people on Earth. It is what binds the people sharing the same religion together, as they have the same values, stories and principles proposed by their specific religion to guide them as a group rather than as individuals. Nevertheless, each person within a religion has their own particular interaction with their god, mystic figure or prophet. This is something unique and personal, since the shared knowledge of the religion cannot control how it is interpreted or “adapted” to fit each person’s individual relationship with their faith. This is mostly due to the experience one has away from one’s spiritual faith, which on most occasions opens one’s eyes to a world outside of religion.

There are many religions in the world, and of these religions there are many different sects. This shows how shared knowledge can travel across the world, different cultures and continue for generations. In this way, shared knowledge of religion prevails. A person could have personal faith in God, but if their religion is a member of a world faith, its religious knowledge system might continue as shared knowledge.

For example, in the Catholic religion, the beliefs, doctrines and practices that make up the tradition are shared knowledge for Catholics worldwide.

Religion is shared knowledge to a certain extent. A massive part of the Christian, Jewish and Muslim religions is the communal ceremonies and traditions, which unite the community of believers of the same religion. That is the part which is entirely shared knowledge: in Catholicism, for instance, church is a service where knowledge about the religion is shared, especially during catechesis.

However, each individual accepts the knowledge he or she believes in. With the theories provided by science and the substantial body of evidence challenging religion, the faith of newer generations is being affected. These generations are adopting a more rational view of life, despite their families’ religions. The atheist and agnostic communities are strengthening in numbers as many people leave their religion, merging their personal knowledge (atheist or agnostic) with what they were taught as children (spiritual beliefs). Having said this, children are more likely to follow their parents’ religion, since they are still too young to make these kinds of lifestyle choices; but once they start to question these beliefs upon reaching a certain age, the choice is theirs whether to follow a non-religious or a religious path, based on the knowledge they have acquired throughout the years.

I do believe that in religion the knowledge exchange is a one-way communication, because there is a deity (or deities) who is almighty and powerful and who should not be questioned. In Christianity and Judaism, this power is channelled through people who are extremely devoted to their faith and have “more knowledge” of the religion than the average person.

e.g. priests or rabbis

Apart from having a god who should be respected, there is a man in power guiding the people, spreading the word of God and teaching the faith to the population. The style of teaching in this case is preaching: one enters a church or synagogue and listens to the priest or rabbi to learn more about one’s spiritual faith and the message of God. Nevertheless, these sermons are not debatable; a person cannot stand up in the middle and question what is being said. If the views conflict, that remains part of the individual’s own personal knowledge. That is why, in this sense, the knowledge exchange is a one-way communication, especially since the foundations of these religions are so ancient that they have turned into unquestionable holy books.


In conclusion, almost everything we know and do is because of experience. We have learned through our past mistakes what not to do; but with religion, there is already a foundation upon which you can guide your life. Nevertheless, we are all still human and make mistakes, which (depending on an individual’s spiritual faith) religion can help one through: with the shared knowledge it provides, with the personal connection that person has with his or her god, or even by oneself with one’s own personal knowledge.


What is the role of chance in scientific discovery?


“The theory that chromosome behaviour accounts for Mendel’s principles of segregation and independent assortment is known as the Sutton-Boveri chromosome theory of inheritance. Sutton and Boveri were two scientists who worked independently, but it was Sutton who was the first to publish his research. Boveri studied Parascaris equorum, a roundworm with large cells, containing only two pairs of chromosomes.
Historians of science have pointed out that Sutton was aided by the serendipitous use of the research organism, Brachystola magna, a grasshopper. Sutton began his research in Kansas and the great abundance of grasshoppers in that state contributed to its use as a research organism. Brachystola magna had eleven pairs of chromosomes. This made it much easier to distinguish individual chromosomes by their size and shape. Using similar techniques to Boveri, he documented the configuration of chromosomes undergoing meiosis, and made the observation that each chromosome has a well-defined shape that is conserved in each cell generation. This prompted Sutton to proclaim that ‘chromosomes may constitute the physical basis of the Mendelian law of heredity.'”

(BIOLOGY Course Companion, 2014)

Walter Sutton and Theodor Boveri

Sutton-Boveri chromosome theory of inheritance:


To better understand what the role of chance is, you have to know what it means. This is the definition according to the Oxford dictionary:

Chance: The occurrence of events in the absence of any obvious intention or cause

A word commonly used to describe “luck” in a scientific discovery is serendipity: a beneficial, unexpected event that occurs by chance.

Serendipity: The occurrence and development of events by chance in a happy or beneficial way

This word was invented by Horace Walpole in 1754. He explained an unexpected discovery he had made in reference to a Persian fairy tale, The Three Princes of Serendip. Walpole stated that the princes were “always making discoveries, by accidents and sagacity, of things which they were not in quest of”.

“Scientists are not passive recipients of the unexpected; rather, they actively create the conditions for discovering the unexpected and have a robust mental toolkit that makes discovery possible.”

– Kevin Dunbar and Jonathan Fugelsang

Kevin Dunbar estimates that 30%–50% of scientific discoveries are accidental. Along with Fugelsang, Dunbar suggests that the process of scientific discovery often starts when a researcher finds mistakes in their experiment. These unexpected results lead the researcher to try to fix what they think is an error in their procedure. Eventually, the researcher decides that the “error” is too persistent and systematic to be a coincidence, and he or she begins to think of theoretical explanations for it.

“In the fields of observation, chance only favours the mind which is prepared”

– Louis Pasteur

Here, Pasteur was speaking about the almost “accidental” way in which Oersted, a Danish physicist, discovered the basic principles of electromagnetism. Pasteur himself then “by chance” came upon a significant observation:

through a microscope, he observed that healthy fermentation produced round globules, which lengthened as alteration began, becoming very long and slender at the point they became lactic.

This allowed manufacturers to observe the health of fermentation during their manufacturing processes to avoid common failures during fermentation. Pasteur began going down a path that would develop into the science of Microbiology while revolutionizing Chemistry at the same time.

Scientists conduct experiments to test a hypothesis. The chance of making an accidental discovery is amplified when no conclusive results are produced. However, it is not at this accidental moment that an actual discovery occurs: the scientist must be able, with a prepared mind, to interpret the accidental observation.

Chance and a prepared mind are linked in the process of making scientific discoveries. A “prepared mind” is one that is constantly curious and persistent, always questioning everything. When the result of an experiment is unexpected or unusual, a prepared mind doesn’t simply discard the result and start the experiment again, but ponders and investigates the unusual observation and why the result turned out the way it did.


Serendipity is a common occurrence throughout the history of scientific discovery; one famous example is Alexander Fleming’s accidental discovery of penicillin in 1928.


Probably the most famous serendipitous event reported in science.

Fleming’s discovery of penicillin began when he was investigating a group of petri dishes on his workbench. They contained colonies of a bacterium called Staphylococcus which Fleming had deliberately placed in the dishes. He found that one of the dishes had become contaminated by a mold, and he noticed that there was a clear area around the mold.


Instead of cleaning or discarding the petri dish and ruling out the contamination as a “mistake”, he decided to investigate why the clear area had appeared. Eventually, he discovered that the mold, a species of Penicillium, was making an antibiotic that killed the bacteria around it. Fleming named the antibiotic penicillin. It soon became an extremely important medicine for fighting infections.


In conclusion, when you are doing research for discoveries, you are in actuality looking for the unexpected. Research is mostly considered successful when something occurs that you didn’t think was possible. Chance/serendipity is usually defined as an accidental discovery, yet that depends on how an individual understands accidents. Persistence is key in scientific discovery to make the serendipitous “accident” valid, since re-testing to confirm answers is essential. Normally, serendipity is the result of a stimulus, not just something that happens in the moment. It provides an incentive to explore science more deeply and allows scientists to deviate from their original goal, widening their knowledge so that the rest of the world can learn and benefit.

Here is an entertaining video showing “10 accidental inventions” you never expected:


To what extent is determining gender for sporting competition a scientific question?


 “Gender testing was introduced at the 1968 Olympic games to address concerns that women with ambiguous physiological genders would have an unfair advantage. This has proven to be problematic for a number of reasons. The chromosomal standard is problematic as non-disjunction can lead to situations where an individual might technically be male, but not define herself that way. People with two X and Y can develop hormonally as a female.
The practice of gender testing was discontinued in 1996 in part because of human rights issues including the right to self-expression and the right to identify one’s own gender. Rather than being a scientific question, it is more fairly a social question.”

(BIOLOGY Course Companion, 2014)

The possibility that men might pose as women and be unfair competitors in women’s sports is an outrageous concept to both the athletes and the public. Since the 1930s, media reports have fuelled claims that individuals who once competed as female athletes were in actuality men.

At the Rome Olympic Games in 1960, the International Amateur Athletics Federation (IAAF) began establishing rules for women athletes’ eligibility. Initially, physical examination was used as a method for gender verification, but this approach was widely disliked. This led to sex chromatin testing (buccal smear) being introduced at the Mexico City Olympic Games in 1968.

The principle was that genetic females (46, XX) show a single X-chromatin mass (Barr body), whereas males (46, XY) do not.

Unfortunately, sex chromatin analysis fell out of use by geneticists after the International Olympic Committee (IOC) began implementing gender verification. The lack of laboratories performing the test heightened the problem of errors in interpretation by inexperienced workers, yielding false-positive and false-negative results.

However, an even greater problem is that there exist phenotypic females with male sex chromatin patterns (e.g. androgen insensitivity, XY gonadal dysgenesis). These individuals have NO athletic advantage as a result of their genetic abnormality and should not be excluded from competition.

Only the chromosomal (genetic) sex is analysed by sex chromatin testing, not the anatomical or psychosocial status.

For all the above reasons, sex chromatin testing unfairly excludes many athletes.
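The rule behind the buccal smear is simple: a cell displays one fewer Barr body (X-chromatin mass) than it has X chromosomes, and an athlete was classified "female" if a Barr body was present. Here is a toy sketch of what that rule implies for different karyotypes; the function names are mine and illustrative only, not part of any real screening protocol:

```python
# Illustrative sketch of the Barr-body (sex chromatin) rule:
# a cell shows (number of X chromosomes - 1) Barr bodies, and the
# 1968 screening treated "at least one Barr body" as "female".

def barr_bodies(sex_chromosomes: str) -> int:
    """Barr bodies = number of X chromosomes minus one (never negative)."""
    return max(sex_chromosomes.count("X") - 1, 0)

def chromatin_test(sex_chromosomes: str) -> str:
    """What the buccal smear would report for a given karyotype."""
    return "female" if barr_bodies(sex_chromosomes) >= 1 else "male"

print(chromatin_test("XX"))   # → female (46,XX)
print(chromatin_test("XY"))   # → male (46,XY)
print(chromatin_test("XXY"))  # → female, despite a male phenotype (47,XXY)
# A phenotypic female with a 46,XY pattern (e.g. androgen insensitivity)
# would be classified "male" and unfairly excluded.
```

The sketch makes the flaw concrete: the test only ever sees chromosomes, so any mismatch between karyotype and anatomy or identity produces a false result.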

These tests fail to address the fundamental injustices of laboratory based gender verification tests. The IAAF considered the issue in 1991 and 1992, and concluded that gender verification testing wasn’t needed. This was thought to be especially true because of the current use of urine testing. Males masquerading as females in these circumstances are extremely unlikely. Screening for gender is no longer undertaken at IAAF competitions.

Anatomical differences between the female body and the male body

Table: men vs women

The testing is humiliating, socially insensitive, and not entirely accurate or effective, which has caused it to come under scrutiny from those who have acknowledged these facts. It is especially difficult and problematic in the case of people who could be considered intersex. Genetic differences can allow a person to have a male genetic make-up and female anatomy or body chemistry.

A resolution was passed at the 1996 International Olympic Committee (IOC) World Conference on Women and Health “to discontinue the current process of gender verification during the Olympic Games.” The IOC’s board voted to discontinue the practice in June 1999, although in individual cases the IOC still retains the right to test gender.

Newer rules permit transsexual athletes to compete in the Olympics after having completed sex reassignment surgery, being legally recognized as a member of the sex they wish to compete as, and having undergone two years of hormonal therapy (unless they transitioned before puberty).

The International Association of Athletics Federations ceased sex screening for all athletes in 1992, but retains the option of assessing the sex of a participant should suspicions arise. This was invoked most recently in August 2009 with the mandated testing of the South African athlete Caster Semenya, a sad and infuriating test that insulted Semenya’s talent as an athlete.


Here is a link that explains in full detail the injustice a Spanish hurdler, María José Martínez Patiño, faced when she failed the gender test.

In conclusion, gender verification for sporting competition should be considered a scientific question only if the gender of all competitors is to be determined so that the competition is fair, since anatomically, males are on average stronger and have more endurance than most females. Nevertheless, if a person identifies as female and has the body of one, then she should be allowed to compete in the female category. Having a genetic condition that classifies you as male doesn’t justify excluding you from the female category when in every other respect you are female. If this were the reason for exclusion, then every person with a genetic condition that gives them an advantage should also be disqualified, if it is “fairness” that is in question (e.g. Eero Mäntyranta, a gold-medallist cross-country skier with a genetic condition that increased his oxygen-carrying capacity by up to 50%).

It is unfair that testing is demanded merely on the basis of an athlete’s physical appearance, or because an “underdog” beat out the competition. Sport is extremely competitive, but it should bring unity, not discrimination.

Here is an amazing video that explains the unfairness of using gender verification/testing to prohibit athletes, especially women, from participating in competitions they have rightfully trained for.


For our first Theory of Knowledge presentation, our class was split into groups and each person was assigned a Way of Knowing to analyze and present as an iMovie. The Way of Knowing assigned to us was Language.

Here is the end product. I hope you gain a new perspective on the matter and an insight into our thoughts as students on Language as a WOK. Enjoy! :

An interesting anecdote about the making of this video is how we came up with our introduction. Since we go to an international school, there are many people who speak a wide variety of languages, and most of us don't even know how to say hello in each other's languages. That is how the concept of introducing our video with each person saying "hello" in a different language developed. Doing this teaches the viewer how to say hello in different languages, exposing them to a small fraction of each person's culture while also demonstrating that every language has a common salutation.


Did Mendel alter his results for publication?


Gregor Mendel (1822-1884), an Austrian monk, is regarded as the father of genetics. He paved the way for modern genetics.

In the following link there is a clear and helpful animation explaining in detail Mendel’s experiment on pea plants to research inheritance:

Mendel was able to achieve the following results:

| Parental plants | Hybrid plants | Offspring from self-pollinating the hybrids | Ratio |
| --- | --- | --- | --- |
| Tall stem x dwarf stem | All tall | 787 tall : 277 dwarf | 2.84 : 1 |
| Round seed x wrinkled seed | All round | 5474 round : 1850 wrinkled | 2.96 : 1 |
| Yellow cotyledons x green cotyledons | All yellow | 6022 yellow : 2001 green | 3.01 : 1 |
| Purple flowers x white flowers | All purple | 705 purple : 224 white | 3.15 : 1 |
| Full pods x constricted pods | All full | 882 full : 299 constricted | 2.95 : 1 |
| Green unripe pods x yellow unripe pods | All green | 428 green : 152 yellow | 2.82 : 1 |
| Flowers along stem x flowers at stem tip | All along stem | 651 along stem : 207 tip | 3.14 : 1 |
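The ratios in the table come straight from dividing the dominant count by the recessive count in each cross. Here is a quick sketch in Python that recomputes them (the trait names are my own labels; the pod-colour row uses 428 green : 152 yellow, Mendel's published counts, which match the 2.82 : 1 ratio):

```python
# Mendel's F2 counts: (trait, dominant count, recessive count)
counts = [
    ("stem length", 787, 277),
    ("seed shape", 5474, 1850),
    ("cotyledon colour", 6022, 2001),
    ("flower colour", 705, 224),
    ("pod shape", 882, 299),
    ("pod colour", 428, 152),
    ("flower position", 651, 207),
]

# Each dominant/recessive quotient should land close to 3.
for trait, dom, rec in counts:
    print(f"{trait}: {dom / rec:.2f} : 1")
```

Running this reproduces the ratio column above, e.g. 787/277 gives 2.84 : 1 for stem length.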


From this he was able to formulate three laws that still underpin the information we have on genetics today:

| Law | Definition |
| --- | --- |
| Law of Segregation | During gamete formation, the alleles for each gene segregate from each other so that each gamete carries only one allele for each gene. |
| Law of Independent Assortment | Genes for different traits can segregate independently during the formation of gametes. |
| Law of Dominance | Some alleles are dominant while others are recessive; an organism with at least one dominant allele will display the effect of the dominant allele. |

“In 1936, the English statistician R.A. Fisher published an analysis of Mendel’s data. His conclusion was that ‘the data of most, if not all, of the experiments have been falsified so as to agree closely with Mendel’s expectations.’ Doubts still persist about Mendel’s data - a recent estimate put the chance of getting seven ratios as close to 3:1 as Mendel’s at 1 in 33,000.”

To get ratios as close to 3:1 as Mendel’s would have required a “miracle of chance”.
What are the possible explanations apart from a “miracle of chance”?

Mendel counted 705 purple-flowered plants and 224 white-flowered plants and realized that 705:224 is almost exactly a 3:1 ratio, which probably inspired him to aim for this result in all his other crosses. One explanation apart from “a miracle of chance” for Mendel’s perfect results could be that he reported only the trials whose results came closest to the 3:1 ratio. Lastly, Mendel could simply have modified the data in order to produce the perfect results for his experiment.
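Fisher's suspicion can be made concrete with a chi-square goodness-of-fit test against the expected 3:1 ratio. The sketch below uses only standard Python; the counts are the flower-colour figures quoted above, and 3.841 is the usual 5% critical value for one degree of freedom:

```python
# Chi-square goodness-of-fit against a 3:1 dominant:recessive expectation.
def chi_square_3_to_1(dominant, recessive):
    total = dominant + recessive
    exp_dom, exp_rec = 0.75 * total, 0.25 * total
    return ((dominant - exp_dom) ** 2 / exp_dom
            + (recessive - exp_rec) ** 2 / exp_rec)

# Flower colour: 705 purple vs 224 white.
chi2 = chi_square_3_to_1(705, 224)
print(f"chi-square = {chi2:.3f}")  # well below 3.841, the 5% critical value
```

A single small chi-square value is unremarkable; Fisher's point was that getting values this close to zero across all seven traits at once is what strains belief.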


Many distinguished scientists, including Louis Pasteur, are known to have discarded results when they did not fit a theory. 
Is it acceptable to do this?


It is not acceptable to discard results when they do not fit the theory since it affects the legitimacy and reliability of the experiment.

In our IB Biology class, our hypothesis for a lab is sometimes incorrect, and the results let us see how and why it is wrong. If results are discarded to manufacture perfect data that prove the hypothesis, there are serious consequences for our advancement of knowledge, because we come to believe something that is not true. Admittedly, results are sometimes inaccurate due to problems with the calculations. Even then, it is not acceptable to discard them; they should be included with a justification of why they are incorrect, accompanied by the corrected values.

The scientific community is not striving for false perfection; answers are necessary even when they are “wrong”, since they can inspire other scientists to solve the problem or help a scientist see where he or she went wrong. It is important that no theories are wrongly accepted.

How can we distinguish between results that are due to an error and results that falsify a theory?


Results that are due to an error can be identified using our existing knowledge: the result does not make sense given the information we already have, or the results are not consistent with one another (anomalous data).

On the other hand, fabricated results can be suspected when the data is suspiciously perfect on every trial, suggesting that the results have been tampered with.

That is why one should always repeat an experiment multiple times to confirm that the results are accurately portrayed. Comparing the data against statistics and known formulas is also essential to distinguish genuine errors from falsified data.
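One common statistical criterion for spotting a suspect trial is to flag any value that lies more than a couple of standard deviations from the mean of the repeated measurements. This is only a minimal sketch of that idea; the trial values and the threshold k = 2 are illustrative assumptions, not anything from an actual lab:

```python
import statistics

def flag_anomalies(values, k=2.0):
    """Flag values more than k standard deviations from the mean."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [v for v in values if abs(v - mean) > k * sd]

# Six plausible repeat measurements plus one obvious outlier.
trials = [3.01, 2.96, 2.84, 3.15, 2.95, 9.80, 3.14]
print(flag_anomalies(trials))
```

Note that a flagged value is only a candidate for investigation, not for deletion: as argued below, the anomaly should be reported and explained, not silently dropped.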

What standard do you use as a student in rejecting anomalous data?


I do not believe any student should reject anomalous data: it occurred for a reason, and it can help us determine the mistakes made in the procedure or reveal any unknown experimental or biological errors. It is vital to analyse why anomalous data is present and to include it in the final results while trying to prove the theory, explaining why such data appears and whether it disproves the hypothesis. Everyone grows from mistakes, and this data helps scientists learn from theirs and expand their minds.