Myth of the perfect soil: Quick, general principles of fertile soils

I was once asked by a gardening enthusiast why the perfect soil could not be “manufactured”; that is, one concocted or formulated so that it is perfectly suitable for all plants. I replied that such a soil cannot exist because different plants have different nutrient demands. In other words, different plants eat differently, so to speak. Just as there is no one type of animal feed suitable for all animals, a one-size-fits-all soil is simply impossible.

But if a perfect soil cannot be formulated, my gardening enthusiast friend continued, then why not develop instead a specific soil perfectly suited for a certain plant? For instance, one could formulate a soil perfect for mango trees, another for roses, yet another for chilies, and so on.

Yes, why not indeed. In fact, some stores are already selling bags of potting soil or mix formulated specifically for certain plants such as tomatoes, cactus, vegetables, and flowers. So, are these formulated plant growing media perfect?

I don’t quite remember what my exact reply was, but I do remember not being quite satisfied with my answer. It was a question that cannot be answered in just a few short sentences or in a hurry.

A fertile soil is essentially one that is able to meet the plant’s nutrient and water demands and to physically support the plant. In other words, developing a fertile soil is not simply a case of packing the required plant nutrients, in sufficient quantities, into the soil. Instead, soil fertility is governed by a myriad of biological, chemical, and physical factors that interact with one another in a complex manner to determine the final outcome.

Soil fertility is determined by a myriad of factors that interact with one another. A perfectly fertile soil requires each of these factors to be simultaneously attained. (c) andreusK @ fotolia.com.

So, if we want to create a perfect soil, even for one specific target plant, we need to simultaneously attain or meet all the criteria that ultimately create a fertile soil. But in practice, it is very difficult, if not impossible, to achieve all these criteria at once.

Urban gardeners often ignore soil structure. A soil must be strong enough to securely support the plant, yet weak enough to allow us to work it: to till the soil to improve aeration, for instance. A soil must be strong enough to resist erosion, particularly by water, yet weak enough to allow water to penetrate and wet the soil and to allow the plant roots to expand freely within the soil in search of water, nutrients, and anchorage.

Very coarse-textured soils like sandy soils (such as those at the beach) are structurally too weak to support large or tall trees. Sandy soils also suffer from a lack of inherent plant nutrients because they are unable to hold onto nutrients. Sandy soils are also very porous. They receive water very easily (which is good), but they also lose it very easily (which is bad). “Easy come, easy go” is the unfortunate story of sandy soils when it comes to water and nutrients.

At the other extreme, very fine-textured soils like clayey soils (those we can wet and mold into shape, as in clay pottery) might be strong enough to physically support large trees, but they also tend to suffocate plant roots. Clayey soils are much less porous, so they are easily inundated with water (i.e., flooded); they are thus more prone to erosion, and plants grown in them face a greater risk of root rot or decay. But clayey soils do have one advantage over sandy soils: they hold onto soil nutrients more strongly than sandy soils do, so more nutrients are available to plants grown in clayey than in sandy soils.

What we need then is a soil between these two extremes: one that is not too sandy and not too clayey. Generally, a good, fertile soil is one with equal parts of sand and clay (50:50% by weight). Even better is one with more sand than clay, from about 60:40 up to no more than 70:30 sand to clay. Such proportions provide the best of both worlds: the sand fraction provides good drainage and aeration, the clay fraction provides strong nutrient retention, and together they give the growing plant strong physical support and a good rooting environment.
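As a quick illustration of this guideline, here is a minimal sketch of my own (not from the original article; the function name and messages are invented, and the thresholds simply encode the 50-70% sand range described above):

```python
def texture_check(sand_pct: float, clay_pct: float) -> str:
    """Rough check of a soil's sand:clay proportion against the general
    guideline above (50:50 up to about 70:30 sand to clay, by weight).
    The two percentages are assumed to describe the whole soil."""
    if sand_pct < 50:
        return "likely too clayey: expect poor drainage and aeration"
    if sand_pct > 70:
        return "likely too sandy: expect weak water and nutrient retention"
    return "within the general 50-70% sand guideline"

# Example: a 60:40 sand:clay soil
print(texture_check(60, 40))  # within the general 50-70% sand guideline
```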

Potting mixes, commonly sold at stores, are very popular because they are light, easy to work with, and formulated to be rich in nutrients for plants in general. Potting mixes rarely contain any soil; while the absence of soil makes them very light, it also means they cannot support large or tall plants. Potting mixes are also very porous, so they risk drying out very quickly and can suffer large nutrient losses via the downward drainage of water (caused by over-watering, for instance). One way to remedy a potting mix’s very rapid drainage and weak structural support is to mix in fertile soil in equal parts with the potting mix.

But unless we are dealing with the soil in a few potted plants, it is difficult, costly, and impractical to alter a soil’s texture, that is, to change its proportion of sand and clay, because this would involve bringing in vast quantities of soil from an external source (or buying far too many bags of potting mix) and mixing them with the soil in our garden.

Another important factor that strongly affects soil fertility is soil acidity, which is measured as pH (recall that the pH scale runs from 0 to 14, where values below 7 are acidic, 7 is neutral, and values above 7 are alkaline). Malaysian soils are unfortunately very acidic in nature, often ranging between pH 2 and 4. In this acidic range, vital plant nutrients like nitrogen (N), phosphorus (P), potassium (K), calcium (Ca), and magnesium (Mg) are very much less available to plants (Fig. 1). That they are less available does not mean they are in low quantities or absent from the soil. Instead, these nutrients tend to be in chemical forms that cannot be directly used or taken up by the plants.

Fig. 1. Nutrient availability depends on soil pH (image from Kgopa, P., Moshia, M. & Shaker, P. (2014). Soil pH management across spatially variable soils, 4: 203-218.).

Even worse, low soil pH also makes elements like aluminum (Al), iron (Fe), and manganese (Mn) more available to plants, and these elements are toxic to plants in excess amounts. In other words, Malaysian soils are inherently not fertile due to their low pH. In fact, if I were to grade our Malaysian soils, where grade A means very fertile soils and grade F means toxic soils, Malaysian soils in general would score a grade C – not the worst, but not great either.

Ideally, soil pH should be between 5.5 and 6.5 (near neutral) because, in this range, most nutrients are in forms available for direct plant uptake. How, then, to raise the soil pH? One common method is to add lime (calcium hydroxide or calcium oxide) to the soil. Generally, for Malaysian soils, about 600 g of lime is needed for every square meter of soil area (or roughly 40 g per medium-sized pot).
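To make the arithmetic concrete, here is a minimal sketch (my own, not from the article; the function and its default rates simply encode the 600 g per square meter and 40 g per pot figures above):

```python
def lime_needed(area_m2: float = 0.0, pots: int = 0,
                g_per_m2: float = 600.0, g_per_pot: float = 40.0) -> float:
    """Approximate lime requirement (grams) using the general guideline
    above: ~600 g per square meter of soil, or ~40 g per medium-sized pot,
    to raise the pH of typically acidic Malaysian soils."""
    return area_m2 * g_per_m2 + pots * g_per_pot

# Example: a 4 m x 3 m garden bed plus 5 medium-sized pots
print(lime_needed(area_m2=12, pots=5))  # 7400.0 g, i.e. about 7.4 kg
```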

As stated earlier, the essential plant nutrients include N, P, K, Ca, and Mg, so one common question I am asked is: what quantities of these nutrients should a fertile soil have? Their approximate levels have been worked out, as shown in Tables 1 to 5 (a simple way to turn these tables into a quick lookup is sketched in the code after Table 5), but we must remember that these approximations are exactly that: general guidelines. This is because, as stated earlier, different plants have different nutrient requirements. Moreover, a plant’s nutrient requirements change as it ages or progresses through its life stages, so the nutrient requirements of a seedling differ from those of a mature plant or one that has entered the next phase of flowering or fruiting.

Table 1. Nitrogen (N) levels in soil
N (% by weight) | Description
<0.05 | Very low
0.05-0.15 | Low
0.15-0.25 | Medium
0.25-0.50 | High
>0.50 | Very high

Table 2. Phosphorous (P) levels in soil
P (mg P / kg soil) | Description
<5 | Very low
5-10 | Low
10-17 | Moderate
17-25 | High
>25 | Very high

Table 3. Potassium (K) levels in soil
K (cmol / kg soil) | Description
<0.2 | Low
>0.6 | High

Table 4. Calcium (Ca) levels in soil
Ca (cmol / kg soil) | Description
<4 | Low
>10 | High

Table 5. Magnesium (Mg) levels in soil
Mg (cmol / kg soil) | Description
<0.5 | Low
>4 | High
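The sketch below (a rough helper of my own, not part of the article) shows how Tables 1 to 5 could be turned into a simple lookup. For K, Ca, and Mg, the tables only give the “Low” and “High” cut-offs, so I label everything in between as “Medium”; that middle label is my own assumption.

```python
# Soil test interpretation based on Tables 1-5 above.
# Each entry is a list of (upper_bound, description); math.inf catches the top class.
import math

NUTRIENT_CLASSES = {
    "N (%)":        [(0.05, "Very low"), (0.15, "Low"), (0.25, "Medium"),
                     (0.50, "High"), (math.inf, "Very high")],
    "P (mg/kg)":    [(5, "Very low"), (10, "Low"), (17, "Moderate"),
                     (25, "High"), (math.inf, "Very high")],
    "K (cmol/kg)":  [(0.2, "Low"), (0.6, "Medium"), (math.inf, "High")],
    "Ca (cmol/kg)": [(4, "Low"), (10, "Medium"), (math.inf, "High")],
    "Mg (cmol/kg)": [(0.5, "Low"), (4, "Medium"), (math.inf, "High")],
}

def classify(nutrient: str, value: float) -> str:
    """Return the descriptive class (e.g. 'Low', 'High') for a soil test value."""
    for upper, label in NUTRIENT_CLASSES[nutrient]:
        if value < upper:
            return label
    return "unknown"

# Example soil test result (made-up values for illustration)
sample = {"N (%)": 0.18, "P (mg/kg)": 8, "K (cmol/kg)": 0.45,
          "Ca (cmol/kg)": 6, "Mg (cmol/kg)": 0.3}
for nutrient, value in sample.items():
    print(f"{nutrient}: {value} -> {classify(nutrient, value)}")
```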

To complicate things even further, one nutrient can affect the plant uptake of another, either positively (synergistic) or negatively (antagonistic) (Fig. 2). An excess of N (a very important nutrient often applied in large quantities), for instance, can suppress the uptake of K and of the micronutrients boron (B) and copper (Cu). Too much Ca instead reduces the uptake of a range of nutrients, particularly Mg. This phenomenon of nutrient antagonism complicates fertilizer formulation and recommendation because it risks triggering nutrient uptake suppression when one nutrient is oversupplied or appears in excess. In other words, it is not just the quantity of nutrients in the soil that matters, but their amounts relative to one another.

Fig. 2. Nutrient interactions mean the presence of one nutrient can affect the plant uptake of other nutrients in a synergistic or antagonistic manner (image from nutriag.com/article/mulderschart).

This phenomenon of nutrient antagonism is also why we should never over-fertilize our plants: doing so risks oversupplying one or more nutrients, which in turn suppresses the plant uptake of others. In my experience helping urban gardeners, over-fertilization is a very common problem: gardeners tend to experiment with various fertilizers too frequently or use excessive quantities. They are also sometimes too impatient when their plants appear not to respond to a certain fertilizer, so they try another fertilizer, and another, and so on, risking nutrient oversupply and toxicity.

One of the most effective ways to improve soil fertility is to add organic matter such as compost, waste materials from our gardens (wood chips, lawn cuttings, or dried leaves and twigs), and certain kinds of food scraps. Organic matter is often regarded as the lifeblood of soils, with good reason. Its addition improves a multitude of soil properties related to fertility: it makes the soil less acidic (recall that Malaysian soils are generally very acidic), provides plant nutrients gradually (like a slow-release fertilizer) as it decomposes, increases soil aeration and drainage, increases the availability of both water and nutrients, and increases the soil’s resistance to erosion.

Building up organic matter in our soils, however, requires regular applications. This is analogous to going to the gym to build or maintain our muscles. Start skipping gym sessions and our muscles start to shrivel. In the same way, regular applications of organic matter are required to increase or maintain the level of soil organic matter. Apply organic matter in smaller quantities or irregularly, and the soil’s organic matter level will gradually decline over time.

But the unfortunate truth is that large portions of applied organic matter are lost from the soil. This is particularly true for Malaysian soils: our tropical climate’s heavy rainfall and high air temperatures lead to rapid organic matter decomposition and large losses by erosion.

There is also a limit to how much we can increase the organic matter in our soils, even with regular applications. Organic matter levels rise rather rapidly in the first six months after application, then gradually slow down and eventually stagnate at some level, typically no more than about 5% of the total soil weight. Applying large quantities of organic matter may not necessarily lead to high soil organic matter levels and can at times be counterproductive: too much organic matter can create anaerobic (oxygen-starved) conditions, which lead to poorer, not better, plant growth.

How much organic matter can we safely apply then? Generally, no more than 8 kg per square meter of ground area (or roughly no more than 600 g per medium-sized pot) should be applied at any one time. Re-application should only be done when the organic matter has nearly all decomposed (or is no longer visible on the soil surface).

Lastly, watering. A lack of water has a more detrimental effect on plant growth than a lack of nutrients. Even a soil rich in nutrients is of no use to the plant if there is inadequate water, because nutrients in the soil or from fertilizers must be dissolved in water (in solute form) before plant roots can take them up. The ability of a soil to retain water is therefore crucial. All soils eventually dry out, some faster than others, depending on soil texture, as discussed earlier. Soils lose water via evaporation and drainage, so we need to supply enough water to replace what is lost. Supplying too much water is just as bad as supplying too little.

Urban gardeners I have met tend to overwater their plants, yet their plants tend not to suffer any detrimental effects. This is probably because their soils (particularly if they are using potting mix) have good drainage, so the excess water drains out and flooding or soil saturation is avoided. Problems arise when the soil has poor drainage or when simply too much water is applied in a short time. Soils that are saturated all the time risk causing plant root rot, among other problems.

As with nutrients, different plants have different water requirements. But as a general guideline, plants require about 5 L of water per square meter of ground area (or roughly 350 mL per medium-sized pot) per day. On very hot, dry days, the amount of water applied can be doubled.
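As a quick worked example of the last two guidelines (again, a rough sketch of my own, not from the article; the figures are simply the 8 kg/m², 600 g/pot, 5 L/m², and 350 mL/pot rates just described):

```python
def organic_matter_limit(area_m2: float = 0.0, pots: int = 0) -> float:
    """Maximum organic matter (kg) to apply at any one time:
    ~8 kg per square meter, or ~0.6 kg per medium-sized pot."""
    return area_m2 * 8.0 + pots * 0.6

def daily_water(area_m2: float = 0.0, pots: int = 0, hot_dry: bool = False) -> float:
    """Approximate daily watering (litres): ~5 L per square meter, or
    ~0.35 L per medium-sized pot, doubled on very hot, dry days."""
    litres = area_m2 * 5.0 + pots * 0.35
    return litres * 2 if hot_dry else litres

# Example: a 2 m x 1 m raised bed plus 4 pots, on a hot, dry day
print(organic_matter_limit(area_m2=2, pots=4))       # 18.4 kg at most per application
print(daily_water(area_m2=2, pots=4, hot_dry=True))  # 22.8 L for the day
```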

The perfect soil does not exist and cannot be manufactured. We often have to make do with the soil in our gardens. But the good news is that even problematic soils can be significantly improved through proper management. Applying organic matter is one very effective way to increase soil fertility, but even then, watering and fertilizer applications still need to be properly managed. Most of all, it requires patience. Improving soil fertility takes time. Do not be impatient, and avoid the common mistake of experimenting with too many fertilizers too quickly when results are not forthcoming.




Is watering our houseplants with washed rice water really that effective? Here’s the scientific evidence

Our friends, our neighbors, even strangers we meet swear by it. They claim that watering our houseplants with the water left over from washing rice is effective, as good as or even better than using fertilizers. My neighbor, for instance, says her house orchids have never failed to bloom because she feeds her plants with that one “special ingredient”: the water from her washed rice.

But where is the scientific evidence that washed rice water is effective?

Surprisingly, no research has been done on the effectiveness of washed rice water specifically on the growth of any plant. Most studies have been about the potential use of washed rice water as a beauty product or about the loss of human nutrients when rice is washed. Studies such as that by Malakar and Banerjee (1959) and those reviewed by Juliano (1985, 1993) have reported that washing rice can cause up to half of the water-soluble vitamins and minerals to be lost from the rice.

The exact amount of these nutrient losses depends on the type of rice, how much water was used to wash the rice, and how rigorously the washing was done. But generally, washing causes rice to lose up to 7% protein, 30% crude fiber, 15% free amino acids, 25% calcium (Ca), 47% total phosphorus (P), 47% iron (Fe), 11% zinc (Zn), 41% potassium (K), 59% thiamine, 26% riboflavin, and 60% niacin.

But what was lost from the rice is now gained by the water. Perhaps these leached nutrients now in the washed rice water could be beneficial to our houseplants.

Let’s find out. I asked one of my final year agriculture students to conduct such an experiment to answer this burning question “once and for all”.

Methodology

Water spinach (Ipomoea reptans), more widely known as kangkung, was used as the test crop. Kangkung was planted in polybags 150 mm wide and 200 mm tall, one plant per polybag. Each polybag was filled with 9 kg of soil (Bungor soil series, which has a rather coarse texture of about 50-60% sand and 20-40% clay).

The treatments were: 1) washed rice water (RIC), 2) NPK 15:15:15 fertilizer (NPK), and 3) control (CON).

In the RIC treatment, the kangkung plant in each polybag was watered daily with 200 ml of washed rice water. In the NPK treatment, 5 g of NPK 15:15:15 fertilizer was applied once per polybag (before planting) onto the soil, and the plants were watered daily with 200 ml of tap water per polybag. The CON treatment was the control: the plants were watered daily with 200 ml of tap water per polybag, without any fertilizer or washed rice water. Each treatment had five replications.

The RIC and NPK treatments would determine whether washed rice water is as good as or more effective than fertilizer in increasing plant growth. The CON treatment is the baseline against which kangkung growth in the RIC and NPK treatments is compared when kangkung is grown without any fertilizer or washed rice water.

In this experiment, my student always used the same white rice, and the rice-to-water ratio was 1 : 1.5 by volume (in other words, for every 1 L of white rice, she used 1.5 L of water to wash the rice). The rice was always washed in the same way.

The experiment continued for five weeks, after which several plant growth parameters (leaf number, plant height, fresh and dry plant weight, leaf area, and specific leaf area or leaf thickness) and plant nutrient content (N, K, Ca, and Mg) were measured. Additionally, soil properties such as pH, K, Ca, and Mg were measured. Unfortunately, due to faulty equipment, plant P, soil N, and soil P content could not be measured.

Results

Statistical analysis revealed that, of all the plant growth parameters measured, only the number of leaves and the fresh plant weight (Figs. 1 and 2) were significantly affected (p < 0.10) by the treatments. Figs. 1 and 2 show that, at the 90% confidence level, kangkung grown in the RIC (washed rice water) and NPK (fertilizer) treatments did not differ from each other, and both produced more leaves and heavier fresh plants than the control (CON) treatment.
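For readers curious how such a comparison is typically run, here is a minimal sketch in Python using made-up numbers (five replicate values per treatment, purely illustrative, not the experiment’s actual data). The SNK test used in the study is not available in the common Python libraries, so Tukey’s HSD is shown as a closely related post-hoc test.

```python
# Illustrative one-way ANOVA with a post-hoc comparison, mirroring the
# structure of the experiment (3 treatments x 5 replicates).
# The numbers below are invented for illustration only.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

leaves = {
    "CON": [18, 20, 19, 17, 21],   # control: tap water only
    "NPK": [24, 26, 23, 25, 27],   # NPK 15:15:15 fertilizer
    "RIC": [25, 23, 26, 24, 26],   # washed rice water
}

# One-way ANOVA across the three treatments
f_stat, p_value = stats.f_oneway(*leaves.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")

# Post-hoc pairwise comparison (Tukey's HSD as a stand-in for SNK),
# at the 10% significance level used in the study
values = np.concatenate(list(leaves.values()))
groups = np.repeat(list(leaves.keys()), [len(v) for v in leaves.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.10))
```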

Fig. 1. Mean (± standard error) of the number of leaves for the three treatments. Means with same letter are not significantly different from one another (p>0.10) according to SNK test.

Fig. 2. Mean (± standard error) of the fresh (wet) plant weight for the three treatments. Means with same letter are not significantly different from one another (p>0.10) according to SNK test.

On average, kangkung grown in both RIC and NPK treatments had 26% more leaves and were 59% heavier than the kangkung grown in the CON treatment.

The better plant growth in the RIC and NPK treatments was due to the additional supply of N and K from the washed rice water and the NPK fertilizer. This was reflected in the higher N and K contents in the plant and soil in the RIC and NPK treatments (Figs. 3 to 5). Fig. 6, for instance, shows that washed rice water had about twice as much K as tap water.

Fig. 3. Mean (± standard error) of the plant N for the three treatments. Means with same letter are not significantly different from one another (p>0.05) according to SNK test.

Fig. 4. Mean (± standard error) of plant K for the three treatments. Means with same letter are not significantly different from one another (p>0.05) according to SNK test.

Fig. 5. Mean (± standard error) of the soil K for the three treatments. Means with same letter are not significantly different from one another (p>0.05) according to SNK test.

Kangkung has a high demand for N and an even higher demand for K (Susila et al., 2012). Consequently, the additional supply of N and K from either washed rice water or fertilizer would benefit kangkung and result in better growth, such as more leaves and heavier plant biomass, as observed in this study.

Fig. 6. Mean (± standard error) of the K, Ca, and Mg content in the washed rice water and tap water.

Figs. 7 and 8 show an interesting trend: in the RIC treatment, soil Ca was the highest but plant Mg was the lowest. This is because Ca and Mg are antagonistic to each other: a high Ca content suppresses the plant uptake of Mg. Fig. 6 shows that, compared to tap water, washed rice water had about four times more Ca and nearly six times less Mg.

Fig. 7. Mean (± standard error) of the soil Ca for the three treatments. Means with same letter are not significantly different from one another (p>0.05) according to SNK test.

Fig. 8. Mean (± standard error) of the plant Mg for the three treatments. Means with same letter are not significantly different from one another (p>0.05) according to SNK test.

Lastly, soils in the RIC treatment had a pH about 19% higher (less acidic) than the soils in the NPK and CON treatments (Fig. 9). The consequence of this pH increase is minimal because soil pH in the RIC treatment still remained rather low, below 5. But perhaps over the longer run, with regular watering with washed rice water, soil pH could increase further, making more soil nutrients available to the plant as the soil becomes progressively less acidic.

Fig. 9. Mean (± standard error) of the soil pH for the three treatments. Means with same letter are not significantly different from one another (p>0.05) according to SNK test.

So, what do all of these results mean?

The results showed that washed rice water is as effective as NPK fertilizer in promoting plant growth, at least in terms of the number of leaves produced and the fresh plant biomass.

The implication is that washed rice water can replace NPK fertilizer. This study adds credence to the idea that, rather than discarding the water after washing our rice, we can reuse it to water our houseplants, and that this water is generally as effective as applying NPK fertilizer; we thus save on fertilizer, energy, and money.

The level of confidence in this study for the plant growth parameters was 90%, not the 95% or 99% usual in most scientific studies. But perhaps with a larger sample size, these results would be statistically significant at a higher level, or more plant growth parameters would be found to respond significantly to washed rice water.

Nonetheless, the belief that washed rice water encourages plant growth is supported by the findings of higher N and K content in the plant (as well as higher K in the soil), which were significant at the 95% level. Washed rice water does supply the essential nutrients N and K, which the kangkung plant needs in large amounts. With this additional supply of N and K, kangkung, as well as other plants, can be expected to respond favorably with increased growth and yield.

Potential problems of using washed rice water

Admittedly, using water from washed rice will always be for domestic, household use. Using such enriched water for large-scale or commercial farming would be impractical, as it would require far too much rice washing! Nonetheless, domestic use of washed rice water, as stated earlier, is a good way to recycle water in the household rather than simply pouring it down the drain.

Reusing water from washed rice can be part of a household campaign to save energy and water and to reduce wastage. (c) Stockgiu @ fotolia.com

The second potential problem is that the washed rice water has to be used almost immediately. Leaving the water out in the open encourages fermentation and creates an unwanted sour smell, though it would be interesting to compare fermented and unfermented rice water on our houseplants.

The third potential problem is whether prolonged use of washed rice water would encourage the incidence and spread of pests (like rodents) and diseases. This kangkung experiment ran for only five weeks, too short to see any potential incidence of pests and diseases.

In the end, I am encouraged by the results of this study, perhaps the first to examine in a scientifically rigorous manner whether washed rice water really is effective in promoting plant growth. It should be the start of more experiments: testing more plant or crop types (such as fruit or flowering plants) and including more plant growth and soil parameters.

I would like to thank my student, Syuhaibah, for her hard work on this experiment.

References

  1. Juliano, B. O. (1985). Rice: chemistry and technology, 2nd ed. St Paul, MN: American Association of Cereal Chemists.
  2. Juliano, B. O. (1993). Rice in human nutrition. Rome: International Rice Research Institute Food and Agriculture Organization of the United Nations.
  3. Malakar, M. C., & Banerjee, S. N. (1959). Effect of cooking rice with different volumes of water on the loss of nutrients and on digestibility of rice in vitro. Journal of Food Science, 24, 751-756.
  4. Susila, A. D., Prasetyo, T., & Palada, M. C. (2012). Optimum fertilizer rate for kangkong (Ipomoea reptans L.) production in Ultisols Nanggung. In A. D. Susila, B. S. Purwoko, J. M. Roshetko, M. C. Palada, J. Kartika, L. Dahlia, K. Wijay, A. Rahmanulloh, M. Raimadoya, T. Koesoemaningtyas, H. Puspitawati, T. Prasetyo, S. Budidarsono, I. Kurniawan, M. Reyes, W. Suthumchai, K. Kunta & S. Sombatpanit (Eds.), Vegetable-agroforestry systems in Indonesia. Special Publication No. 6c. (pp. 101-112). Bangkok: World Association of Soil and Water Conservation (WASWAC) and World Agroforestry Center (ICRAF).



Engineering the climate: The ridiculous movie Geostorm poses some important questions for us

Imagine a future where we are able to control the climate by using some very sophisticated technology involving an array of satellites orbiting Earth. With these satellites we can control the weather, overcoming detrimental climate change. Storms, hurricanes, and harsh winters are all but a distant memory of the “bad old days of not knowing any better”. Such is the premise of the recent 2017 Hollywood movie Geostorm.

The 2017 Hollywood movie Geostorm is ridiculous but raises interesting questions especially relevant today: the role, risks, and effectiveness of geoengineering in climate change mitigation.

But sadly, instead of science, what we get from Geostorm is apocalypse porn. The movie indulges in style over substance, obsessing over pandemonium-filled special effects rather than the science. This is a shame and a missed opportunity because Geostorm touches on some very pertinent issues today: climate change and the role of geoengineering.

Geoengineering is the large-scale and deliberate modification of Earth’s climate, primarily to mitigate climate change. But geoengineering is controversial because it is dangerous, and it is dangerous because we cannot predict its outcome or fully control it. Climate science is complex. Climate is the net outcome of many factors that interact with one another in a nonlinear manner. Alter one or more of these factors, and the whole climate system goes out of sync in ways that can be difficult to predict. Moreover, the effects of geoengineering may not be reversible and may even exacerbate the problem.

No wonder then that in 2010, 193 countries at the UN Convention on Biological Diversity meeting in Japan agreed to outlaw geoengineering projects, permitting only small-scale scientific research studies.

The world is slow to respond to climate change. Our collective mitigation efforts still fall far short of what is needed to correct the imbalance in Earth’s energy budget. The world appears to be warming unrelentingly and could even surpass 2 degrees Celsius of warming, the tipping point beyond which a warmer world becomes a permanent, irreparable state.

“Climate change is happening faster than our ability to respond,” observed astrophysicist Neil deGrasse Tyson. Global greenhouse gas (GHG) emissions, in particular of methane (CH4) and nitrous oxide (N2O), have increased steadily every year, with an overall increase of 91% from 1970 to 2012. Only carbon dioxide (CO2) emissions have, surprisingly, stalled: as of 2016 they had flatlined for three years in a row. This could be due to the efforts of Russia, China, Japan, the US, and the EU in reducing or stabilizing their CO2 emissions.

The effectiveness of geoengineering strategies has so far not been promising. A recent study in Scientific Reports estimated that one popular geoengineering strategy, ocean fertilization, if deployed, could alter global rainfall patterns and affect water resources. The idea of ocean fertilization is simple: pump iron into the ocean; because iron stimulates the growth of plankton, the plankton would in turn absorb greater amounts of CO2, thus “sucking” more CO2 out of the atmosphere.

But twelve ocean fertilization studies, including the European Iron Fertilization Experiment (EIFEX) in 2004, have shown mixed results. Some trials showed that CO2 sequestration was indeed increased by ocean fertilization, but others showed no effect. In some cases, adding iron to the ocean failed to stimulate plankton growth at all. Iron, as it turns out, is only one of many factors that stimulate plankton growth in the oceans. But even if ocean fertilization were to work perfectly, Prof. Victor Smetacek of the Alfred Wegener Institute predicted that it would take up only a quarter of the extra CO2 deposited by human activities.

Solar geoengineering strategies face similar problems. Releasing sulfur-based aerosols into the stratosphere would scatter and reflect incoming solar radiation, reducing the amount reaching the ground by as much as 20%. A related idea is to use a giant space mirror (or many small mirrors) parked in the sky or in space to reflect a portion of the incoming solar radiation. Both strategies work by reducing the amount of solar radiation reaching the ground, resulting in cooler air temperatures.

Ocean fertilization: Introducing iron into the ocean encourages plankton growth, and in turn, the plankton absorb CO2; thus, removing CO2 from the atmosphere and storing the carbon in the ocean (photo: www.oceanpastures.com).

Injection of aerosol into the stratosphere helps to reflect solar radiation (much like volcanic ash); thus, cooling Earth (photo: large.stanford.edu).

But the major problem with stratospheric aerosols is that they form sulfuric acid, which eats away at the ozone layer, while space mirrors are prohibitively expensive and require technology far more advanced than anything we have today. Moreover, these mirrors could cause an uneven distribution of solar radiation and unintended cooling on Earth. A country could receive less solar radiation, upsetting its energy balance and altering rainfall patterns, and the net outcome could be lower agricultural crop yields. The consequences could be far-reaching: economic crises or political unrest in countries whose climates have been unpredictably affected by these space reflectors.

Installing space mirrors over Earth to reflect solar radiation (photo from www.scmp.com).

Another concern with geoengineering is that it addresses only the symptoms, not the root causes, of climate change. Solar geoengineering, for instance, reduces the amount of solar radiation reaching the ground but does nothing to reduce the amount of CO2 released by human activities. Geoengineering risks being used as a band-aid solution to climate change and as an excuse to continue with our business-as-usual polluting practices.

In the end, we need to realize that there is no all-in-one solution, no magic bullet, for mitigating climate change. Geoengineering is only one option. And even then, Prof. Frank Keutsch of Harvard University, Cambridge, MA, realistically puts geoengineering in its place: “Geoengineering is like taking painkillers. When things are really bad, painkillers can help but they don’t address the cause of a disease and they may cause more harm than good. We really don’t know the effects of geoengineering, but that is why we’re doing this research.”

Research on geoengineering methods needs to continue and expand to determine whether they can be safely and reliably deployed. David Keith of Harvard University, Cambridge, MA, and his associates, for instance, have developed aerosols made of calcite that reflect solar radiation effectively but without forming the sulfuric acid that would destroy the ozone layer. This is a step in the right direction, but ultimately, the most important strategy against climate change is not geoengineering but reducing GHG emissions from human activities.

 




Dirty, rotten, immoral, godless, evil atheists

I was recently interviewed by a journalist from Free Malaysia Today (FMT), who asked me what I thought about the recent 2017 study by Gervais and his associates on the near-worldwide bias against atheists. The prejudice against atheists isn’t really news to me, but what was news was that Gervais’s study found even atheists to be unconsciously biased against their fellow atheists, thinking them immoral.

My FMT interview was published today [PDF article], but sadly a great deal of what I said in the interview was not published. Consequently, the FMT article was a little emasculated. So, I think it wise to publish my full opinion and remarks here for posterity.


The strong distrust of atheists should not really surprise us. There is a tendency in many countries, including secular ones, to be biased against atheists. In the US, for instance, a 2015 Gallup poll revealed that atheists were the second least trusted group of people, and that Americans would rather have a Muslim than an atheist as their President. Bias against atheists is even stronger in highly religious countries like Malaysia, where coming out as an atheist can be socially very damaging. A recent statement by a Malaysian government minister, for instance, called for atheists, in his own words, “to be hunted down vehemently”.

The recent results from the study by Gervais and his associates are not unique, for they are supported by the findings of other studies. But unlike previous studies, which were smaller in scope and limited to participants in Western countries, Gervais’s recent study is much more comprehensive, covering over 3,000 people across 13 countries (both secular and religious) on five continents.

Gervais’s study reveals our preconceived notion or bias against atheists: that atheists are morally bad. But we need to be careful not to extrapolate or misinterpret Gervais’s findings to mean something they are not. They cannot be taken as evidence that atheists are indeed morally bad, or even good. Science is not a democratic process: just because the majority share the same opinion does not make that opinion factually true. Instead, science often reveals what at first seems to be common sense or intuitively right to be, in the end, inaccurate, if not entirely erroneous. A 2016 study by CSIRO (The Commonwealth Scientific and Industrial Research Organisation), for instance, reported that nearly two thirds of Australians said it was “common sense” that climate change was not real and, even if real, not human-induced.

Studies like Gervais’s are really, at the fundamental level, asking us two questions: why is religion so important to us, and what is morality and is it only derived from religion?

How many gods have we humans worshiped, past and present? One encyclopedia of religions I read says 2,500, another 4,000 to 5,000, and if we include the various Hindu gods, one estimate even exceeds 33 million gods. The world currently has over 7 billion people, and about 85% of them hold some sort of religious belief. Atheism is growing in some parts of the world, but the religious still far outnumber the atheists. Why are we humans so religious? Why has religion survived and thrived throughout human history? Some religions, like Christianity, Islam, Buddhism, and Hinduism, have persisted for centuries, but this is not true for most religions. The average lifespan of a religion is 25 years. Religions literally come and go, but our desire to worship “something” persists. Religion is not a fluke, a one-off, short random event in our history.

Religion is an evolutionary by-product of human cognition. We use religion to help us find meaning, to make sense of our world and our purpose. Unlike animals, we have an innate propensity to find meaningful patterns in seemingly random or chaotic events. We seek to understand how our world works, why it works, and who caused it. It is insufficient for us just to know the “hows” and “whys”; we also seek to explain events in terms of agents, that is, to determine who or what caused them. Even children as young as three tend to invoke supernatural reasoning to explain phenomena they do not understand. These agents are perceived by children to act for a purpose and not by chance, and they need not be visible. Children find it easier, for example, to accept that plants and animals were brought about for a reason than that they arose by chance or for no reason. In other words, we tend to be religious rather than not.

Religion is important to many people because their identity, hopes, culture, and moral system are derived from their religion. To many people, morality is by default some very complicated code of conduct that requires supernatural definition, justification, and guidance.

Many believe our morality can only be derived from a supernatural code of conduct. (c) Stéphane Bidouze @ fotolia.com

But morality is actually a very simple concept, so simple that many people find it hard to believe at first: “Do unto others as you would have them do unto you.” This so-called Golden Rule is essentially: if we want to be treated nicely by others, then be nice.

Even primates have been shown to have some sense of morality. Chimpanzees, who cannot swim, have been observed drowning in zoo moats while trying to save others, and they have also been observed consoling one another. In a classic experiment, when given the chance to obtain food by pulling a chain that would also deliver an electric shock to a companion, rhesus monkeys would rather starve themselves for several days than cause pain to their companions.

Morality in animals? In a classic experiment, rhesus monkeys would rather starve for several days than cause pain to their companions, because pulling the food chain brought them food but delivered electric shocks to their companions. (c) ake @ fotolia.com.

If forsaking religion were bad, then there should be some evidence that secular societies tend to fail or be worse off than religious ones. Yet scientific studies consistently show the opposite: people in secular countries, compared to those in religious ones, tend to be more involved in charity work; are more trusting of strangers; have higher IQ scores; have lower levels of prejudice, ethnocentrism, racism, and homophobia; show greater support for women’s equality; are more appreciative of science; and have higher rates of subjective well-being. Secular countries also show higher economic growth, greater democratic stability, and better governance than religious countries.

In the end, the results from Gervais’s recent study are interesting and important, for they highlight how strongly inclined we are toward religion and how many of us still see morality as having a supernatural basis.

But perception is not proof. Our perception is limited by our personal experience and myopic perspective, and it is strongly influenced by our biases. Science has instead shown that the link between the absence of religion and moral deficiency is not as clear, strong, or straightforward as most of us like to believe.




Ignorance: Science behind the toughest question ever asked in a Miss Universe pageant

No one watches Miss Universe pageants to exercise their intellect. But Miss Universe 2000 is one very rare and notable exception. Katja Thomsen Grien, the then Miss Uruguay, made it into the final five, only to be flummoxed by what is perhaps the most difficult and profound question ever asked in any pageant. Her question, set earlier by Miss India, Priyanka Chopra, was just nine words long: “If ignorance is bliss, why do we seek knowledge?”

Miss Uruguay 2000, Katja Thomsen Grien, was asked what is perhaps the toughest and most profound question ever asked in any pageant: “If ignorance is bliss, why do we seek knowledge?” @ missosology.org

Miss Uruguay, whose first language was not English, struggled to understand, let alone answer, the question. Nonetheless, bravely choosing not to request help from a translator, she did finally give a fairly competent answer: ignorance, she opined, was the source of the world’s problems at that time.

I remember catching that particular Miss Universe episode on TV at the time, and I remember thinking, “Wait a minute. Ignorance isn’t bliss because ignorance means no knowledge, or a lack of it. Really, who wants to be stupid?” Easy peasy. Question successfully answered.

Or so I thought.

Fast forward nearly twenty years, and I picked up the book “Agnotology: The Making & Unmaking of Ignorance”, a collection of academic articles edited by Robert Proctor and Londa Schiebinger. This book made me realize that most people’s understanding of ignorance, including mine, is incomplete: ignorance is not simply the opposite of knowledge.

Ignorance is not just an absence of knowledge but can take various forms. Ignorance is sometimes desired. Manufactured ignorance is another form, created to deceive or to hide truths. (c) olly @ fotolia.com

Contrary to common belief, ignorance does not always mean an absence of knowledge. Ignorance can include false knowledge, and in certain cases ignorance is actually good and desirable (and, yes, bliss too) and even our right to have.

Ignorance appears in several forms, one of which is inherent in science. Ignorance is a resource that drives science. We humans are naturally inquisitive creatures, uncomfortable in our ignorance: not knowing, or knowing too little. Our ignorance prompts us to inquire, to observe and collect information, and to understand, and science is the methodical means by which we reduce our ignorance. The whole point of science is to fill gaps in knowledge, but ignorance can never be completely eliminated, and very often, as any scientist can attest, more knowledge actually begets more ignorance.

Socrates is famously credited with saying, “The more you know, the more you realize how little you know.” We answer some questions only to realize there are even more questions to answer. But this is not to say science is a worthless pursuit. Far from it. Science is forever pushing forward the boundaries of knowledge. It is science, not religion or other superstitions, that has revealed the most about us, our environment, and our history and possible futures.

Ignorance can also be a product of deliberate omission. We cannot, for instance, possibly understand, know, or focus on everything. Science continuously chips away at our ignorance, but by choosing to study certain aspects of it, we inevitably leave other parts unanswered and unexplored. Over time, Proctor and Schiebinger remark, the price of our selection is “lost knowledge”: we become ignorant of what we do not know.

But not all knowledge is good: some is dangerous. There are many cases where we wish we could put the genie back in the bottle. The knowledge that enabled us to create nuclear and biological weapons, knowledge about torture, and unethical animal and human studies are only some examples of things about which we might wish we had remained ignorant.

So, in some cases, ignorance is good because it protects us. Consider military and other sensitive national information, or even our personal information, that could be used against the country and against us if revealed. National secrecy laws are strictly enforced to maintain ignorance, and such laws keep the country and us safe and secure. Individual privacy is even seen as a basic human right by many countries. Complete knowledge is not always desirable.

Manufactured falsehood is the final form of ignorance, where false information is deliberately created, with hidden agendas, to confuse or mislead people. History and even current events are replete with examples of ignorance manufactured to mislead people about the truth: that smoking causes cancer, that air pollution causes acid rain, that CFCs (chlorofluorocarbons) destroy the ozone layer, and, most recently, that human activities cause global warming.

Manufactured ignorance is a strategic and hidden ploy, often driven by greed, to counter the truth, because exposing the truth would disrupt the profitable status quo of the political or business environment. Manufactured ignorance works by sowing seeds of doubt, obfuscating facts, and cherry-picking evidence. It prolongs debate on issues by creating or amplifying disputes or controversies.

“Doubt is our product.” The tobacco industry worked hard to manufacture ignorance, obfuscating facts to prolong the debate on the risks of smoking for as long as possible. (c) Artem Furman @ fotolia.com

“Doubt is our product,” so wrote one tobacco executive in a leaked memo, “since it is the best means of competing with the ‘body of fact’ that exists in the minds of the general public. It is also the means of establishing a controversy.”

The tobacco and fossil fuel industries and their lobbyists are guilty of manufacturing ignorance to protect their investments and profits, because the truth is dangerous and inconvenient and would detrimentally affect their business dominance and profits. But governments, too, stand guilty. Our leaders manufacture ignorance to protect their power and positions against their own scandals, corruption, misdeeds, and incompetence, and even manufacture ignorance to attack those who oppose them.

Fossil fuel industries today continue what the tobacco industry did over the past four decades: sowing seeds of doubt about the science of climate change and claiming more evidence is needed. (c) spiritofamerica @ fotolia.com

The current US President, Donald Trump, exemplifies a leader who frequently lies and embraces ignorance, leading Jake Tapper of CNN to remark about him, “I’ve never really seen this level of falsehood … it’s conspiracy theories based on nothing.” Even the former US President, Barack Obama, mocked Trump by saying: “Ignorance is not a virtue.”

What is worrying is that, despite being called out, Trump’s exaggerations, outright lies, and ignorance are becoming acceptable, or at least tolerated, by a great deal of the American public.

The ubiquity of the internet and social media has made the spread of ignorance faster and more frequent. Fake news and conspiracy theories, automatically taken as truths, are clicked, read, and quickly shared. In a moment’s notice, alternative facts spread to every corner of the world and are reinforced to the point that it is becoming harder to distinguish fact from fiction. The explosion of online (and independent) news channels has helped present alternative viewpoints, but alongside them many others have proliferated that deliberately present falsehoods, driven by hidden agendas, to spread and reinforce ignorance.

“Agnotology: The Making & Unmaking of Ignorance”, edited by Robert Proctor and Londa Schiebinger (Stanford University Press, 2008).

Published nearly ten years ago, Robert Proctor and Londa Schiebinger’s book, “Agnotology: The Making & Unmaking of Ignorance”, is a fascinating read. The science of ignorance, or agnotology, a term coined by Robert Proctor of Stanford University, is a field yet to be formally established, but it is perhaps timely to formalize the study of ignorance, especially today.

“If ignorance is bliss, why do we seek knowledge?”

A short question, true, but its brevity and apparent simplicity hide a profound, thought-provoking intellectual exercise.




We are not special

Let’s face it. We are not special. We like to think we are, that our goals, rants, aspirations, and struggles really matter. But we are stardust, as Neil deGrasse Tyson reminds us. It sounds poetic, but it is also true. We are made of molecules forged in the crucibles of stars in deep space. When these stars exploded, they ejected their elements, which became the building blocks from which increasingly heavier elements formed, finally combining with one another to make matter: new stars, planets, and, yes, little us too.

Look at Earth, our home. A pale dot amidst the billions and trillions of other planets out there. A mote of dust, as the late Carl Sagan remarked. And if the entire 4.6-billion-year history of Earth were condensed into a 24-hour clock, humanity would appear less than two minutes before midnight. That is how insignificant we are in the grand scheme of the universe. Our 80-or-so years of life on Earth are but a negligible fraction of time.

“That’s here. That’s home. That’s us. On it everyone you love, everyone you know, everyone you ever heard of, every human being who ever was, lived out their lives. There is perhaps no better demonstration of the folly of human conceits than this distant image of our tiny world.” – Carl Sagan, in his 1994 book, “The Pale Blue Dot: A Vision of the Human Future in Space.”

But we like to be extraordinary. Today’s sages tell us to be. They feed upon our narcissism, which yearns to be extraordinary, to do the extraordinary, and to live extraordinary lives. But the advice to be extraordinary is itself contradictory. If everyone were extraordinary, then no one, by definition, would be extraordinary, because no one would stand out from the rest.

So, yes, we are not special.

But that should not depress us. Instead, it should drive us to appreciate that our time on Earth is very short and finite. We may not be special, and on the scale of the universe we are insignificant and our lives a fleeting moment in history, but this does not mean our lives should not matter. The idea that we are not special should humble us. It should challenge us to re-orient our lives to make them count with what little time we have left, so that our lives make a significant impact on those around us. Because we have lived, others have been changed and have benefited.

So, what then is our purpose in life? What is our legacy, our immortality project? Our life’s purpose is a compass that helps us distinguish between the important, the trivial, and the irrelevant. It separates the wheat from the chaff. It distinguishes the struggles and aspirations that matter, those that deserve our full energy, attention, time, and money, from those we should ignore or at least de-emphasize. Our purpose in life liberates us because it provides guidance, assuring us that we are dedicating our lives to goals or pursuits nobler than ourselves.

But it is not all psychology and pep talk. Having a strong purpose in life cascades down to the biological level. A 2013 study by Steve Cole of the University of California found that people with more hedonic lifestyles had gene expression patterns similar to those seen in people suffering from loneliness and stress, compared with people choosing a more eudemonic lifestyle, a life driven by more than self-gratification. And brain scans of people with more eudemonic lifestyles showed lower stress responses than those of people with less eudemonic lifestyles. In other words, people with a long-term life purpose live longer and are healthier.

But thinking about our purpose in life, let alone setting one, is hard. It is scary, and as blogger Mark Manson wrote in his book “The Subtle Art of Not Giving a F*ck”, we don’t do it because we have no clue what we are doing.

The late Stephen R. Covey, in his book “The 7 Habits of Highly Effective People”, probably said it best on how we can find our purpose in life: “[Imagine attending your own funeral] … What would you like each of the speakers to say about you and your life? … What character would you like them to have seen in you? What contributions, what achievements would you want them to remember? … What difference would you like to have made in [people’s] lives?”

The Subtle Art of Not Giving a F*ck by Mark Manson.

Our deaths are inevitable, but rather than dreading them, we should let them warn us against wasting our lives. But change is difficult and fraught with pain, suffering, and struggle. Athletes, for instance, are willing to bear the tedium and pain of training because they know the outcome of their struggle is becoming fitter, stronger, and faster. No one likes pain, but people are willing to face and endure it provided the outcome is worthwhile and fulfills their purpose in life. Mark Manson says it best: our self-worth is not a measure of how we feel about our positive experiences but of how we feel about our negative ones. Pain tells us to pay attention and to learn. Our pain, if we respond correctly and are willing to learn, initiates meaningful change. Trying to pursue a pain-free life is instead foolish because it avoids learning and meaningful change, and it leads to inconsequential, perhaps even selfish and self-indulgent, lives.

Achieving the extraordinary is then not a target in itself but an outcome, perhaps even an accidental one, of pursuing our aspirations. We may dedicate our lives to helping the poor, for instance, and our efforts might earn us recognition, awards, and even celebrity-like status, but these are an outcome, not the goal, of our purpose.

Why am I here? (c) freshideas @ fotolia.com

But what characterizes a meaningful life purpose? Obviously, identifying one’s purpose in life is highly specific to each individual. Mark Manson, however, offers that a person’s purpose in life should encompass good values, and good values are those that are reality-based, socially constructive, and immediate and controllable. Honesty is an example of a good value, says Mark, because it is real, it benefits others, and it is under our control. Popularity is not: it is out of our control (we need to convince others to like us), it may not be real because people may not see us as we want them to, and being popular is, well, selfish, indulgent, and does little to help others.

Alfred Hitchcock once said, “Drama is life with the boring bits cut out.” So, if our lives were made into a TV drama, what would our story be, once all the boring, humdrum bits of our lives were cut out? Did our lives matter?

References

  1. Burrell, T. 2017. Why am I here? New Scientist. 28 January 2017. p. 30-33.
  2. Fredrickson, B.L., Grewen, K.M. Coffey, K.A., Algoe, S.B., Firestine, AM., Arevalo, J.M.G., Ma, J., and Cole, S.W. 2013. A functional genomic perspective on human well-being. Proceedings of the National Academy of Sciences 110, no. 33: 13684-13689. [link]
  3. Manson, M. 2016. The subtle art of not giving a f*ck. A counterintuitive approach to living a good life. New York: HarperOne.



Malaysians use 3 billion plastic shopping bags per year, so why is limiting or even banning their use still a grossly inadequate strategy?

Intuitively, it seems a good idea to charge Malaysian shoppers for the use of plastic shopping bags to reduce our nation’s plastic waste.

A study published in Science by Jambeck and her associates in 2015, for instance, estimated that, out of 192 coastal countries in the world, Malaysia is the eighth-largest producer of mismanaged plastic waste (waste that is not adequately disposed of or recycled). The study estimated that in 2010 Malaysia produced 0.94 million tons of mismanaged plastic waste, of which 0.14 to 0.37 million tons may have been washed into the oceans. Thirteen percent of Malaysia’s solid waste is plastic, of which 55 percent is mismanaged.

Malaysia is the 8th largest producer of mismanaged plastic wastes in the world. It is estimated that between 0.14 and 0.37 million tons of our plastics may have been washed into the oceans. (c) aryfahmed @ fotolia.com

But how much of these mismanaged plastic wastes comes from plastic shopping bags? Unfortunately, no rigorous study has determined this amount – or even how many plastic shopping bags Malaysians use in a year. For the latter, various estimates do exist, but they vary widely, swinging from a total of 9 to 22 to even a whopping 55 billion plastic shopping bags per year.

That Malaysians use 55 or even 22 billion shopping bags in a year seems disproportionately large, especially when compared with other countries. All 27 countries in the European Union, with a combined population of about 500 million, used a total of 86 billion plastic shopping bags in 2010 – but Malaysia’s population is only 6 percent of the EU’s. The problem with Malaysia’s estimates is that most of them, if not all, were derived from informal observations at supermarkets.

So, how many plastic shopping bags do Malaysians use in a year? To answer this question, I scoured the internet for data on plastic shopping bag use by various countries along with the countries’ respective GDP (Gross Domestic Product) (Table 1). I figured that countries with higher economic activity or growth would also use more plastic shopping bags. Some countries, however, already have plastic shopping bag bans or charges, so to get a better representation of the relationship between GDP and plastic shopping bag use, I did not consider countries that have some form of national plastic shopping bag ban or charge, such as Taiwan, Australia, the UK, Canada, Ireland, Estonia, Bulgaria, Germany, and Denmark.

Table 1. Number of plastic shopping bags (PSB, in billions) used per year by countries
Country            Year    PSB (billion)
Australia          2012     4.00
Austria            2010     0.36
Belgium            2010     1.03
Brazil             2012    12.00
Bulgaria           2010     1.80
Canada             2012     3.00
Cyprus             2010     0.14
Czech Republic     2010     3.07
Denmark            2010     0.01
Estonia            2010     0.62
EU-27              2010    86.40
Finland            2010     0.01
France             2010     5.02
Germany            2010     5.05
Greece             2010     2.67
Hong Kong          2012     9.80
Hungary            2010     4.63
Ireland            2012     0.07
Ireland            2010     0.07
Israel             2012     2.00
Italy              2010    10.52
Japan              2012    30.00
Latvia             2010     0.97
Lithuania          2010     1.43
Luxembourg         2010     0.01
Malta              2010     0.04
Morocco            2010     3.00
Netherlands        2010     1.15
New Zealand        2012     0.87
Norway             2012     1.00
Poland             2010    17.68
Portugal           2010     4.89
Romania            2010     5.00
Singapore          2013     3.00
Slovakia           2010     2.50
Slovenia           2010     0.95
South Africa       2012     8.00
Spain              2010     5.44
Sweden             2010     0.90
Taiwan             2012     5.80
UK                 2010     9.69
US                 2012    90.00

The result is what you see in Fig. 1. It turns out that there is a linear relationship, albeit a weak one, between GDP and plastic shopping bag use.

Fig. 1. Relationship between the annual number of plastic shopping bags used and gross domestic product (GDP) of countries.

Using the linear regression equation and Malaysia’s mean GDP over the past five years (2011 to 2015), I estimate that Malaysians use a total of 3 billion plastic shopping bags per year, rounded to the nearest billion. This number is incidentally the same as for our immediate neighbor, Singapore. But since Singapore has a smaller population than Malaysia, this means a Singaporean uses nearly six times more plastic shopping bags per year than a Malaysian. However, Singapore is ten times more efficient than Malaysia in managing its plastic wastes. As stated earlier, 55 percent of Malaysia’s plastic wastes are inadequately disposed of or recycled, compared to Singapore’s outstanding five percent. So, despite Singapore’s greater per-capita use of plastic shopping bags, Singapore’s mismanaged plastic wastes per capita are actually 28 times lower than Malaysia’s.
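
For readers who want to reproduce this kind of back-of-the-envelope estimate, below is a minimal sketch of the approach: fit a simple linear regression of annual plastic shopping bag use against GDP, then predict from Malaysia’s mean GDP. Note that the GDP figures and Malaysia’s mean GDP in the sketch are placeholder numbers I have inserted for illustration only; they are not the values behind Fig. 1.

    # A minimal sketch of the regression-based estimate described above.
    # The GDP values below (in billion USD) are placeholders for illustration;
    # the actual GDP data used for Fig. 1 are not reproduced here.
    import numpy as np

    # (GDP in billion USD, plastic shopping bags used per year in billions)
    data = [
        (2465.0, 12.00),   # Brazil (2012)    -- GDP assumed
        (6203.0, 30.00),   # Japan (2012)     -- GDP assumed
        (16155.0, 90.00),  # US (2012)        -- GDP assumed
        (262.0, 9.80),     # Hong Kong (2012) -- GDP assumed
        (298.0, 3.00),     # Singapore (2013) -- GDP assumed
    ]

    gdp = np.array([row[0] for row in data])
    psb = np.array([row[1] for row in data])

    # Fit a simple linear regression: PSB = slope * GDP + intercept
    slope, intercept = np.polyfit(gdp, psb, 1)

    # Predict Malaysia's annual use from its mean 2011-2015 GDP (placeholder value)
    malaysia_gdp = 310.0
    estimate = slope * malaysia_gdp + intercept
    print(f"Estimated use: {estimate:.1f} billion plastic shopping bags per year")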

Malaysia’s nationwide No Plastic Bag Campaign Day every Saturday and similar campaigns elsewhere in the country are, unfortunately, a knee-jerk response to our country’s waste management problems. Limiting plastic shopping bag use will indeed reduce plastic wastes, but one question we often neglect to ask is: “What is the alternative to plastic bags?” We still need to carry home our purchased items.

Research has shown that plastic bag alternatives such as paper bags and cotton tote bags are actually more environmentally unfriendly than plastic bags. One of the most comprehensive studies, published by the Australian government in 2007, showed that paper bags, because they are thicker than plastic, have a higher carbon footprint than plastic bags. And because paper is thicker and heavier than plastic, paper bags take up more space in trucks, and transport vehicles burn more fuel carrying paper than plastic. A 2011 study by a British agency estimated that a paper bag has to be reused at least four times to match the carbon footprint of a conventional plastic bag.

Existing alternatives to plastic bags have a far higher negative environmental impact than conventional plastic bags. Instead of paper or cotton tote bags, good alternatives are bags made from recycled plastics (photo from greenyatra.org).

Another alternative, the cotton tote bag, fares even worse because cotton is a resource-hungry crop. Cotton occupies less than three percent of the world’s cropland, yet it accounts for about one-fifth of the global insecticide market and one-tenth of pesticides. Moreover, producing one kg of cotton requires 20,000 L of water. A cotton tote bag is estimated to require at least 150 reuses, on average, to equal the environmental impact of a single plastic bag – nearly 40 times more reuses than required of a paper bag.

In other words, replacing plastic shopping bags with existing alternatives may be a case of reducing one problem but greatly exacerbating another.

So, yes, charging for plastic shopping bag use is a good idea, but only because it raises public awareness about our fragile environment, and this strategy cannot stand alone. The onus cannot fall solely on the Malaysian public to fight the large amount of plastic wastes our country generates every year. To do this is to ignore the larger problem.

As stated earlier, Malaysia has a large plastic disposal and recycling problem, with 55% of our plastic wastes mismanaged. The key strategy is therefore to increase our recovery of plastic wastes through greater reuse and recycling of our plastics. Malaysia’s economy, like that of many upper-middle-income countries, is growing rapidly, but this growth is not matched by more effective waste management.

Unfortunately, how much Malaysia recycles is also uncertain. Estimates vary from none (0 percent) to 17 percent. A 2011 report for the Malaysian Ministry of Housing and Local Government put our country’s annual recycling rate at only 7 kg of wastes per capita. If this estimate is accurate, our country’s recycling rate is less than 2 percent, placing Malaysia at the lower end of countries that practise nearly no waste recycling. Also consider the following: the average recycling rate of the top 20 recycling countries in the world is 35%, nearly 20 times higher than Malaysia’s. Austria and Germany have the highest recycling rates in the world, both recycling about 62 percent of their wastes (Fig. 2). And again, our nearest neighbor, Singapore, has one up on us. Singapore is the fourth highest recycler in the world, with an impressive recycling rate of 59 percent.
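
The “less than 2 percent” figure follows from simple arithmetic, provided we assume a total solid waste generation rate of roughly 1 kg per person per day – an assumed round number used only for this sketch, not a figure quoted from the report:

    # Back-of-the-envelope check of the "less than 2 percent" recycling figure.
    # The total waste generation rate (~1 kg per person per day) is an assumption
    # used only for illustration.
    recycled_per_capita = 7.0        # kg recycled per person per year (2011 report)
    waste_per_capita = 1.0 * 365     # kg generated per person per year (assumed)

    recycling_rate = recycled_per_capita / waste_per_capita * 100
    print(f"Implied recycling rate: {recycling_rate:.1f}%")   # about 1.9%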

Fig. 2. Countries that most recycle their wastes (2015).

What then can Malaysia do?

Limiting plastic shopping bags through outright bans or charging for their use is vastly inadequate. Why? As mentioned previously, Malaysia generates about 0.94 million tons of mismanaged plastic wastes a year, and I estimated we use approximately 3 billion plastic shopping bags a year. A plastic shopping bag weighs between 4 and 7 g, so taking the upper limit of 7 g, 3 billion plastic shopping bags would weigh a total of 21,000 tons. So even if we cut our plastic shopping bag use down to zero, we would reduce our mismanaged plastic wastes by a maximum of only about 2 percent.
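
To make that arithmetic explicit, here is a short check of the 2 percent claim; it is a simple sketch that only restates the numbers already given above:

    # Quick check of the figures in the preceding paragraph.
    bags_per_year = 3e9          # estimated plastic shopping bags used per year
    bag_mass_g = 7.0             # upper-limit mass of one bag, in grams
    mismanaged_tons = 0.94e6     # Malaysia's mismanaged plastic wastes, tons per year

    total_tons = bags_per_year * bag_mass_g / 1e6      # grams -> metric tons
    share = total_tons / mismanaged_tons * 100

    print(f"Total mass of bags: {total_tons:,.0f} tons")         # 21,000 tons
    print(f"Share of mismanaged plastic wastes: {share:.1f}%")   # about 2 percent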

What Malaysia needs to do is then to greatly increase our recycling of plastic wastes. The same 2011 report for the Ministry of Housing and Local Government recommended the following for greater plastic wastes management in the country:

  1. More tax incentives be given for companies that undertake waste recycling management,
  2. Members of the public be given rewards and redemption for turning in recyclable plastics,
  3. Companies should be encouraged to buy back their plastics, such as buying back empty plastic bottles, containers, wrappings, and other forms of packaging,
  4. Recycling infrastructure in the country should be improved, and
  5. Innovation on the use and reuse of plastics should also be prioritized.

Our efforts ought to be diverted so that we instead recycle more of our plastic wastes, rather than just limiting our use of plastic shopping bags. (c) Aisyaqilumar @ fotolia.com

Unfortunately, recycling is not only unpopular but also poorly implemented in Malaysia. Starting September 2016, for instance, households in Kuala Lumpur, Putrajaya, and several other states have had to separate their solid wastes into three categories: paper, plastics, and miscellaneous (which includes glass, metal, and organic wastes). Failure to do so risks a penalty of between RM50 and RM500. This is a step in the right direction, but it suffers from poor implementation.

Despite good intentions, recycling of trash is poorly implemented in Malaysia. Malaysian households, despite mandatory instructions to separate their trash, still have their separated trash dumped together (photo from nst.com.my).

My family and I live in KL, and we have been separating our wastes as instructed since Day 1, but to this day, our three plastic bags, each containing a separate group of wastes, are still collected and dumped together. It is disheartening to see our efforts treated this way.

So, Malaysia needs to identify and rigorously implement the most effective solutions to reduce our plastic wastes. Limiting the use of plastic shopping bags is a good start, but alone, it is a grossly inadequate strategy. Cliché though it may be, our strategy can be summarized simply: reduce, reuse, and recycle.

Rather than focusing so much of our energies on limiting the use of plastic shopping bags, Malaysia needs instead to greatly increase the recycling of our plastic wastes. (c) aryfahmed @ fotolia.com

Update (2 Mar. 2017): A condensed form of this article was published today in the New Straits Times newspaper [link].

References

  1. Are plastic-bag bans good for the climate? by Ben Adler, Jun 2, 2016 (http://grist.org/climate-energy/are-plastic-bag-bans-good-for-the-climate/)
  2. Billions of plastic bags still being used (http://www.thestar.com.my/metro/community/2016/08/22/billions-of-plastic-bags-still-being-used-six-years-have-gone-by-since-the-government-launched-the-n/#MhaHPEvvUByM2ywv.99)
  3. Edwards, C. and Fry, J.M. 2011. Life cycle assessment of supermarket carrier bags: a review of the bags available in 2006. Report: SC030148. Environment Agency, Bristol.
  4. Environment: Commission proposes to reduce the use of plastic bags (https://www.euractiv.com/section/sustainable-dev/news/eu-to-halve-plastic-bag-use-by-2019/)
  5. Golden Ecosystem Sdn. Bhd. 2011. A study on plastic management in Peninsular Malaysia. Report for the National Solid Waste Management Department, Ministry of Housing and Local Government Malaysia. Golden Ecosystem Sdn. Bhd., Petaling Jaya.
  6. Jambeck, J.R., Andrady, A., Geyer, R., Narayan, R., Perryman, M., Siegler, T., Wilcox, C. and Lavender Law, K. 2015. Plastic waste inputs from land into the ocean, Science, 347: 768-771.
  7. Managing KL’s rubbish (http://www.thestar.com.my/metro/community/2016/05/30/managing-kls-rubbish-residents-in-the-city-are-more-conscious-of-the-amount-of-waste-they-generate-a/)
  8. Miller, R.M. 2012. Plastic Shopping Bags: An Analysis of Policy Instruments for Plastic Bag Reduction. MSc. Sustainable Development Thesis. Universiteit Utrecht, Netherlands.
  9. Recycling rates worldwide in 2015, by select country (https://www.statista.com/statistics/516456/rate-of-recycling-worldwide-by-key-country/)
  10. The good and the bad of plastic bag bans: Research review (https://journalistsresource.org/studies/environment/pollution-environment/plastic-bag-bans-grocery-shopping-environment)



Root of all evil: How agriculture became our bane and worst mistake

In 1987, the esteemed professor of geography Jared Diamond stunned many people with his article in Discover magazine entitled “The worst mistake in the history of the human race,” in which he argued that agriculture, far from being a blessing, was instead our worst mistake. Agriculture has indeed changed the world – but for the worse, causing gross social and gender inequality and increases in malnutrition, starvation, and epidemic diseases. In some ways, Jared Diamond added, pre-agriculture societies were actually better off than post-agriculture societies. The UK newspaper The Telegraph went as far as to ask in a 2009 article: Is farming the root of all evil?

How could that be? Could agriculture really be our worst mistake, the root of all evil?

No one is certain exactly how agriculture started, only that it began around 10,000 years ago independently and almost simultaneously at six main locations in the world. Charles Darwin, in his book “The Descent of Man, and Selection in Relation to Sex” (published in 1871), casually speculated that agriculture may have started when humans observed that seeds fallen to the ground had sprouted and grown into plants with desirable qualities.

But it wasn’t until the late 1970s that archaeologist Mark Cohen of the State University of New York at Plattsburgh suggested that agriculture probably started more out of desperation than inspiration. Evidence suggests Cohen could be right: rising human populations, combined with a cooling and drying climate, left pre-agriculture societies short of food. People became desperate and started to grow their own food rather than depend on the unstable food supply from hunting and gathering.

Considering that modern humans appeared about 250,000 years ago, agriculture is consequently a very recent human discovery – and a momentous discovery too. The start of agriculture is undoubtedly a very important milestone, for better or worse, in modern human history for several reasons.

Agriculture is the foundation upon which all human civilizations, past and present, from the least to the greatest, are built. Every civilization, without exception, began near rivers for a simple reason: they required easy access to freshwater for their crops and animals. The ancient Egyptian and Nubian civilizations, for instance, began along the Nile River in North Africa, and the Yellow River in China was the birthplace of the Xia, Shang, and Zhou Dynasties. Likewise, the Harappan civilization began along the Indus River and the Mesopotamian civilization along the Tigris-Euphrates Rivers.

Agriculture is the foundation of every human civilization, from the least to the greatest. Only with agriculture could a civilization expand its population to large numbers quickly and to develop complexity and sophistication in its culture and socioeconomic and political structures (c) Pius Lee @ fotolia.com.

Agriculture allowed humans to stop moving from place to place in search of food and to settle down permanently in one area. This carried important consequences. Agriculture gave humans stability, and stability meant humans could increase their populations to large numbers, and do so very rapidly. Before agriculture, humans depended on hunting animals and gathering fruits for food. Such a lifestyle simply could not sustain a large population all year round.

It is estimated that the world population, without agriculture, could not exceed 150 million people. But today the world population stands at over 7 billion people, nearly 50 times more than what a hunter-gatherer world could support.

Hunter-gatherer societies or tribes are small, nomadic, and austere (c) marziafra @ fotolia.com.

Having stability also meant post-agriculture societies could develop increasingly complex and sophisticated culture, education, and socioeconomic and political structures. Human skills, no longer limited to just hunting and gathering, became more diverse, specialized, technical, creative, and methodical. Because of agriculture, societies could now comprise a myriad of professions such as teachers, doctors, politicians, musicians, artists, engineers, farmers, and builders.

But a farm is no Garden of Eden. Agriculture has several important and serious drawbacks. In recent years, anthropologists have quietly revised their view of agriculture: rather than a blessing, its outcome was more of a fall from grace. Why?

Because of agriculture, we have inadvertently traded the quality of our food for quantity. True, agriculture has allowed us to produce abundant food consistently, but it has also limited the types of food we eat. This in turn has caused higher incidences of nutrient deficiencies and unbalanced diets. Hunter-gatherers, for instance, ate a much more varied diet, as many as 60 to 70 types of food per year. But once we converted to agriculture, we became dependent on a much smaller number of food types.

Today, for instance, half of our daily calories come from only three crops: rice, wheat, and corn. Without these three grain crops, we would truly have difficulty fulfilling our daily calorie needs. Such staple foods are rich in carbohydrates but low in protein and do not contain the essential nutrients in amounts sufficient for a healthy life. Depending on a very limited number of crops also means we are vulnerable to food shortages and societal upheavals should our crops fail from drought or pest and disease attacks, for example.

The Great Famine in Ireland between 1845 and 1849 highlights such a case. The Irish over-reliance on a single crop (potato) as their staple diet and the lack of genetic diversity in the planted potatoes meant that when potato blight struck in the 19th century, the disease caused devastating and widespread losses to their food supply. Mass starvation ensued, during which a million people died from starvation or famine-related diseases and another one million emigrated. Even among those who emigrated, it is estimated that one in three still lost their lives.

Examinations of human skeletons in the Nile Valley, Egypt, showed that the hunter-gatherers who lived there some 13,000 years ago had as much as 40% fewer signs of malnutrition and illness (as indicated by their teeth) than their farming successors 1,000 years after agriculture had been adopted. Furthermore, the average height of a hunter-gatherer was 5’ 8’’, but once agriculture was practiced, the average height fell by four inches.

Examination of human skeletons showed that the hunter-gatherers were actually more healthy and longer-living than their early farming successors (c) ymgerman @ fotolia.com.

Such discoveries in the Nile Valley are not unique. Skeletons in Greece and Turkey showed similar signs. Prior to agriculture, the average height of a hunter-gatherer there was 5’ 9’’ for men and 5’ 5’’ for women, but after agriculture, people’s heights fell by nearly half a foot on average. Yet again, people’s health deteriorated as a result of agriculture: compared to their hunter-gatherer predecessors, early farmers had 50% more enamel defects (indicative of malnutrition), four times more iron-deficiency anemia, three times more bone lesions (indicative of infectious diseases), and an increase in degenerative spine conditions (indicative of harder, more physical labor). Even life expectancy fell, from 26 years for hunter-gatherers to 19 years for people in the early post-agriculture period.

The fact that agriculture allowed humans to settle permanently in one area, in large numbers and in crowded spaces, also encouraged the occurrence and spread of infectious diseases and pestilence. Keeping farm animals close to people further exacerbated the risk of epidemic diseases.

Besides encouraging malnutrition, starvation, and epidemic diseases, agriculture worsened social divisions and inequality. Research from the 1960s and 1970s by anthropologists such as Richard Lee (University of Toronto) and the late Yehudi Cohen (then of Rutgers University) showed that hunter-gatherer societies were more egalitarian and consensus-based. Food was not always available, and whatever food was available was consumed quickly; little was stored. Such survival conditions meant that hunter-gatherers had to depend closely on one another for finding food; thus, cooperation, sharing, and mutualism were essential in such societies.

But with the adoption of agriculture, food became abundant, so much so that not everyone needed to be involved in obtaining food. Society eventually divided into food producers and non-producers. Skills became diversified and specialized, some more useful and more sought after than others. The distribution of wealth became more disproportionate, depending on how well one could control the production and distribution of resources. Social hierarchy gradually evolved and became institutionalized, polarizing groups of people and creating the haves and have-nots, the elites and peasants, the rich and the poor. Social inequality was inevitable, and that meant some people had more food and were consequently in better health than others.

Examinations of skeletons from the Greek tombs at Mycenae around 1500 B.C. suggest that royal members had a better diet than the commoners, since the royals were two to three inches taller and had better teeth. Likewise, Chilean mummies from around the year 1000 showed that the elites were healthier than the peasants, with as much as four times fewer bone lesions.

There have even been suggestions that agriculture created gender inequality, or at least made it worse. In farming, it is often the women who do the harder, more physical labor. Frederick Engels, the German philosopher and social scientist, remarked nearly 150 years ago that farming marked the onset of social and gender inequality and the time when political innocence was lost.

Agriculture may have worsened gender inequality. Women often had more labor-intensive, back-breaking jobs than the men on farms (c) cronopia @ fotolia.com.

Agriculture, together with forestry, is today responsible for a third of the world’s total greenhouse gas (GHG) emissions – the gases responsible for global warming. In 2003, William Ruddiman of the University of Virginia proposed that it was the start of agriculture about 10,000 years ago, not the start of industrialization in the 18th century, that began the detrimental climate change we experience today. Ruddiman could well be right. Atmospheric levels of carbon dioxide (CO2) and methane (CH4) have risen steadily since 8,000 and 5,000 years ago, respectively, and their rise is consistent with the timeline of farming intensity. Ruddiman proposed that large-scale land clearing and the expansion of irrigation have been increasing GHG emissions ever since farming began. A 2011 study by Dorian Fuller of University College London suggested that the expansion of rice cultivation and livestock could be responsible for the additional atmospheric methane 1,000 years ago.

The litany of detrimental effects of agricultural activities is long. Climate change is only one of them. Loss of biodiversity and environmental damage due to land clearing and farming are two more.

Talk about returning to our hunter-gatherer roots is pointless. Even if we could reset history and have humans return to their pre-agriculture days, it is most likely that nothing would change: humans would again discover and practise agriculture. Agriculture was not a random event that started spontaneously by chance. As discussed previously, agriculture arose not once but six times around the world, independently and nearly simultaneously. In other words, agriculture was inevitable. As human populations grew, humans simply needed another way to obtain their food more reliably and effectively.

Do we want to return to a hunter-gatherer life anyway? A hunter-gatherer life was hardly romantic or idealistic; it was arduous, short, and ruthless. Violence was common in such societies. Two-thirds of hunter-gatherer societies were in constant warfare, and nearly 90% of them went to war at least once a year. The death rate due to tribal warfare was about 0.5% of the population per year, as calculated by Lawrence Keeley of the University of Illinois – a rate equivalent to 2 billion people dying during the 20th century. Other research estimated that 15% of young men in hunter-gatherer societies were murdered, and Richard Wrangham of Harvard University calculated that more people had died violent deaths before than after the advent of agriculture.

Incessant innovation is an intrinsic human characteristic. We cannot help but innovate. Agriculture is only one of our innovations, a means to obtain food more reliably and abundantly. Without agriculture, nearly all the innovations we see today would not have been possible.

Yes, our innovations have caused us problems and crises, often as unintended side effects, but they have also brought many benefits that improve our quality of life. Health during the early periods of agriculture may have been worse than before its advent, but today health has greatly improved thanks to better knowledge and more effective resource management.

Agricultural practices today have also changed, no longer focusing solely on profits and productivity but also on adopting sustainable practices to reduce agriculture’s negative impacts on the environment and society. Zero burning (during land clearing), mixed farming, organic agriculture, permaculture, intercropping, crop rotation, minimum soil tillage, mulching, composting, and biological pest control are only some of our agricultural innovations to reduce energy use and detrimental impacts.

Agriculture practices today are moving towards greater sustainability to reduce agriculture’s detrimental impacts on the climate, environment, and society. This photo shows an intercropping field of maize and rice in Daklak, Vietnam (c) xuanhuongho @ fotolia.com.

There is no turning back; only onward. So, whether agriculture is for our better or worse would very much depend on how we respond to agriculture and its consequences.

References

  1. Diamond, J. (1987). The worst mistake in the history of the human race. Discover, May 1987, pp. 64-66.
  2. O’Connell, S. (2009). Is farming the root of all evil? The Telegraph, June 23, 2009.
  3. The Economist (2007). Noble or savage? The Economist, Dec 19, 2007.
  4. Tollefson, J. (2011). The 8,000-year-old climate puzzle. Nature online, March 25, 2011.



Preparing for and surviving your Master’s or PhD viva voce (oral exam) in Malaysian universities

The viva voce, or simply the viva, is the most anticipated stage of your research postgraduate study because this is where you defend your research work under the intense scrutiny of several examiners, which may include examiners from outside your university, even from overseas universities. These examiners will determine if your work is credible, important, scientifically robust, and worthy of the level of your study (Master’s or PhD). Most of all, your examiners will determine whether your research has adequately extended the frontier of knowledge.

A successful viva is the final major hurdle of your postgraduate study, after which, provided you make all the corrections recommended by the examiners, you will at long last be awarded the coveted Master’s or PhD degree.

A viva is an oral examination; unlike a written or take-home exam, you cannot delay or do further research before answering difficult questions. Consequently, you must thoroughly understand your work and be able to convince your examiners of your expertise, confidence, intelligence, competency, and maturity in your research work.

There is no universal operating procedure for conducting a viva. Different universities conduct their vivas differently, but regardless of the exact procedure at your university, you will find that all universities share several commonalities. Your viva will have examiners who will critically examine your work. Your viva will require you to give a short oral presentation of your research prior to your examination. And most importantly, the criteria that determine a successful viva candidate are very similar across all universities.

I have been a university lecturer at Universiti Putra Malaysia (UPM) for many years and have participated in many vivas. My own postgraduate students have undergone their vivas too, and I am glad to report that none of the students under my main supervision have faltered in their vivas. All of them have successfully graduated.

This blog article shares my experience on how to prepare for and survive your viva at local Malaysian universities.

Your internal and external examiners

There are two types of examiners: internal and external. Internal examiners are people from within your university, whereas external examiners are from outside your university, possibly from a foreign university. Some universities make it mandatory that external examiners be attached to a foreign university, whereas others are more flexible, allowing external examiners to be appointed from another local university.

One of the viva rooms at Uni. Putra Malaysia (UPM). Seated around the table would be your examiners, your supervisor, and, well, you!

 

Teleconferencing with your examiners is possible at my university, UPM.

At my university, UPM, Master’s students have one internal examiner and one external examiner, where the external examiner can be from either a local or a foreign university. PhD students have two internal examiners and one external examiner, and the external examiner must be from a foreign university. Your main supervisor selects your viva committee, but the final selection requires approval at the faculty level and then by the university graduate school before formal appointments can be made.

To succeed in your viva, you must convince your examiners that: 1) your thesis is your own work (plagiarism and data fabrication are very serious offenses that can return to haunt you even years after graduation), 2) you understand what you have done and written, 3) you are aware of where your work sits in relation to the wider research field, 4) your work is of a sufficiently high standard (in terms of scientific rigor and writing maturity), and 5) you can defend your work in response to the examiners’ questions.

What some students fail to fully realize is that your examiners will evaluate your research strictly based on your written thesis and their face-to-face interaction (question and answer) with you during your viva. Your examiners will not contact you or any member of your supervisory committee prior to your viva to obtain further explanation or to clear up any confusion about your work.

What this means is that your thesis must be robust enough to stand alone as a defense: it must adequately explain the problem and justification of your work, its purpose, how you carried out the work to meet your objectives, your results and interpretation of those results, and the significance of your work to the body of knowledge as well as to society.

Consequently, do not submit a shoddily prepared thesis, thinking you will be able to explain any shortcomings or confusion during your viva. Your examiners, reading your shoddy thesis, will not be kind in their evaluation of your work. I have witnessed students who, because they were in a hurry to return home, submitted a shoddy thesis, only to almost fail their viva. One PhD candidate, ignoring my warning that his thesis was not ready, still submitted his subpar thesis. Unsurprisingly, his viva committee was merciless in their evaluation of his work. Although he was very fortunate to receive an “Accepted with Major Revisions” outcome from his viva committee, he still failed his PhD in the end because he was unable to make the corrections his committee demanded. The required corrections were simply too much for him to handle, as they required a deep overhaul of his methodology, analysis, and interpretation of results.

Some universities keep the appointment of your viva examiners confidential. But if your university allows you to suggest possible candidates as your examiners, or if the appointment of your viva committee is not confidential, then I recommend that you find out as much as you can about the research background and current research of your examiners.

Read up on your examiners’ publications (even those not directly related to your work) to determine their research interests. This will help you anticipate the kinds of questions your examiners are likely to ask. Knowing their research interests will also help you identify specific areas of your work that will strongly draw their interest and thus likely be examined more rigorously.

Are external examiners more critical or stricter than internal examiners? Not always. Internal examiners can examine your work more rigorously than your external examiners, but do not be surprised if, during your viva, you find it is the other way around. In other words, treat your internal and external examiners as equals; no one is automatically more lenient or stricter simply by virtue of being appointed as the internal or external member of your viva committee.

Lastly, you should not contact any of your examiners before your viva, for instance to get tips or hints on their questions or even to get their overall impression of your work.

Your oral presentation

Just before the Q & A (question and answer) session begins in your viva, you will be required to give an oral presentation in front of your viva committee, usually lasting 20 to 30 minutes.

The purpose of this oral presentation is to highlight the problem and justification of your work, the objectives of the study, how the work was conducted (methodology), key findings and your interpretation of them, and finally, the conclusions.

Keep your oral presentation within the allocated time. Do not exceed the time, and do not speak too fast (or flash through slides too quickly for the examiners to read) just to stay within it. As a rule of thumb, one slide takes an average of one minute to present, so a 20-minute presentation needs no more than 20 slides in total. As always, practice your presentation before your viva, preferably in front of your supervisory committee to obtain their feedback.

You do not have to present all your findings in this oral presentation, merely the key findings or those that are most important and interesting. There is no need for a separate literature review section in your presentation; instead, incorporate the literature as you discuss your results. For instance, as you present your study’s results, you can cite references or discuss previous studies that support your results and your interpretation of them.

One common mistake among postgraduate students is to present the conclusion of their study as if it were a summary or an abstract. The conclusion is the “take home message” of your work. One way to present your conclusions during your viva is to imagine succinctly answering a news reporter on live TV who is holding a microphone to your face and asking, “What did you find out, and why should we care?”

Although examiners often wait until you have finished your oral presentation before asking you any questions, some examiners may interrupt you to seek further clarification or give their comments on any of your slides. Expect to be interrupted during your presentation; there are no rules that examiners must stay quiet during your oral presentation.

Know your thesis, please

The days leading up to your viva date are crucial. Use this time to mentally prepare yourself and to reacquaint yourself with your own thesis.

Prepare for your viva by familiarizing yourself with your own thesis, reading up on your examiners’ work (if possible), and reading up on scientific literature related to your research. (c) torwaiphoto @ fotolia.com

I find it irritating when viva candidates behave as if their thesis had been written by someone else. These candidates, for instance, forget what they had written or cannot find a particular text excerpt, equation, figure, table, or chart in their own thesis.

Some viva candidates even become surprised by what they had written or become perplexed by their own written explanation in their thesis.

Of course, no one expects you to have a photographic memory of your thesis, but at the very least, be aware of what you have written and generally know what is located where in your thesis. Of all people, you, the author, should be the most familiar with your own written word!

Supporting documents and note-taking

Bring your notebook (laptop) into the viva room. Ensure it contains all the necessary supporting documents (such as photos of your work or results, articles, saved web pages, etc.). Depending on where the discussion leads during your viva, you might find it useful to show one or more of these supporting documents to your examiners.

Other than your laptop, bring a pen and notepad for taking down comments, suggestions, and corrections from your examiners. Your notes will ensure you do not inadvertently forget any corrections or improvements your examiners want you to make to your thesis. Even if you have an outstanding memory, jot down the examiners’ comments, if only to show them that you treat their comments as important and useful.

Question and answer (Q & A)

Whatever questions your examiners ask you, their questions are intended to evaluate you based on the following nine main areas:

  1. Is the problem of study clear and important?
  2. Are the study’s objectives clear, achievable, and sufficient?
  3. Are the methods used sufficient?
  4. Are the results clear, sufficient, and important?
  5. Can the results be explained clearly and sufficiently?
  6. Have the study’s objectives been achieved?
  7. Is the scientific work robust/rigorous?
  8. Is the level of work sufficient for the awarded degree?
  9. Is the presentation of work and the writing in the thesis adequate?

In other words, to pass your viva, your examiners must be convinced by the following:

  1. that your research problems are clearly identified and important
  2. that your research objectives are adequate and correct
  3. that you have used the correct methodologies
  4. that you have correctly interpreted your results
  5. that your research objectives have been achieved
  6. that your research scope and workload are sufficient for your awarded degree
  7. that the presentation of work in your thesis is clear, understandable, correct, accurate, professional, and follows all formatting guidelines
  8. that your work is original

Anticipate the questions your examiners could ask you. Be honest with yourself. Identify weaknesses in your study and prepare good answers explaining why these weaknesses exist. No research is perfect, but it is important to identify the weaknesses in your research and understand how they affect the validity of your work.

Anticipate the questions your examiners could ask you and come up with good answers for each of them. (c) takasu @ fotolia.com

Examine your research methodology. Why did you use this particular method to measure a given parameter? Why this method and not another? Why did you design your experiment the way you did? Be ready to justify your research design, even if it was based on an accepted or standard method. Understand your methodology deeply, in particular your statistical analysis and design. Why the large error bars?

Examine your research results too. Do your results make sense, and are your results dependable? Can you explain the observed trends in a scientifically robust manner? Do your results agree with previous studies? If not, can you explain why? And how do your results answer your research objectives?

If you have published any papers about your research during your postgraduate study, consult the feedback from your paper reviewers or referees. Their feedback is immensely useful because these are the types of questions, comments, criticisms, and suggestions you could be asked during your viva too.

As discussed earlier, knowing who your examiners are will benefit you: you can better anticipate their questions and pinpoint areas of your work that are likely to be examined more intensely.

How to receive questions and answer them

The mood during your viva is extremely important. You want a cordial, diplomatic, and conducive viva environment. You want a discussion but not a heated one.

Your viva should be an environment where healthy, meaningful, and useful discussion about your research takes place. (c) Minerva Studio @ fotolia.com

 

Avoid heated, angry discussions during your viva at all cost. You may win the battle of an argument but lose the war — and badly. (c) Minerva Studio @ fotolia.com

Some viva candidates can adopt a “siege mentality,” especially under intense questioning by their examiners, and become confrontational, angry, or impatient with them. But examiners are not entirely faultless either. Some examiners, because of their personalities, can appear aggressive, authoritarian, or confrontational, even when that is not their intention. If you encounter such a personality, relax. Do not be offended or become confrontational; you will only make things worse for yourself. Instead, respond in a relaxed, thoughtful, and non-confrontational manner, and you will often find the viva discussion rebalances itself into a more diplomatic tone.

Do not be defensive or confrontational with your examiners. Do not take their criticisms or suggestions on your research as a personal attack. Your examiners can add value to your research. (c) pathdoc @ fotolia.com

Be confident and eager, and enjoy the opportunity to share your work with the examiners. Receive any question gladly, without being defensive or offended. Do not rush to answer; take time to think before answering. Do not take any criticism, comment, suggestion, or feedback about your work as a personal attack. No research, even if carried out by a world-renowned scientist, is free of weaknesses or faults. No research is so perfect or complete that no suggestions can be offered to improve it.

Realize that your examiners, rather than being your taskmasters or saboteurs, are impartial evaluators there to help improve your research work. You will find that your examiners’ comments, criticisms, and suggestions add value to your work. Your examiners can give you deeper insight into your work or make you aware of other possibilities that could explain your results. They may also suggest additional or more detailed data analyses that add value to your research.

If you do not know the answer to a question, it is best to admit it – but then still try to answer it with your most intelligent guess. Your examiners will respect your effort. Avoid lying, being evasive, or “talking in circles”.

If you have difficulty answering a question, avoid blaming the whole world and everyone but yourself. Avoid blaming your supervisors, university, data, resources, or situation. You need to show competency, intelligence, and maturity in your work, so merely blaming others or your circumstances for your difficulties, especially without showing sufficient problem-solving effort, will not reflect well on you.

Also avoid giving common excuses like “this is beyond the scope of my study”, “this problem occurs in all studies”, or “it’s just like that” without first giving a convincing argument to support them.

Handle misunderstandings or misinterpretations

So, what happens if you think your examiners have misinterpreted or misunderstood your work? Do not be so quick to blame them. Ask yourself first: “Was my explanation clear in my thesis?” Some misunderstandings or misinterpretations occur because of unclear, misleading, or even missing explanations on your part. You of course did not intend it, but before blaming your examiners, re-read your thesis to evaluate whether the misunderstanding was caused by you.

Of course, your examiners are not perfect. They may misunderstand some aspects of your work due to careless reading or simple absent-mindedness. This can happen when they are reading a thesis on a large, multi-faceted research project. Your examiners often do not read your thesis in a single continuous session. They may, for instance, have read chapters 1 to 3 two weeks ago, chapter 4 one week ago, and only completed reading your thesis this week. Some facts you wrote earlier could have been forgotten by the time they finish.

Whoever’s fault it is, it is important to explain things well to your examiners to clear up any misunderstanding or misinterpretation. If your writing was indeed unclear, simply explain that you will rewrite that section of text. If your examiners misread your work, then politely point out their mistake.

Outcome of your viva

Your examiners will complete their evaluation of your thesis and your ability to defend your work, and then give you one of the following viva outcomes: 1) Accepted with distinction, 2) Accepted with minor modifications, 3) Accepted with major modifications, 4) Oral re-examination (re-viva voce), 5) Re-submission of thesis, 6) Re-submission of a PhD thesis as a Master’s, or 7) Fail/reject.

Different universities will have slightly different sets of viva outcomes from those presented here. One local Malaysian university, for example, has an additional outcome: Accepted with moderate modifications.

Whatever set of viva outcomes your university offers, you must aim for outcome (1), (2), or (3). It is important that your viva receives an “Accepted” outcome. This means only corrections remain to be done on your thesis before you are awarded your degree (provided, of course, you complete the corrections satisfactorily according to the examiners’ recommendations). Consult your university on the exact definition of each viva outcome; I will not discuss them here.

All other viva outcomes (i.e., those without “Accepted”) are bad, really bad, because they involve a very major rework of your study and a re-sit of your viva. They could also mean a downgrade of the offered degree (instead of a PhD, you would be awarded a Master’s for your work) or, the worst outcome, failure.

If your viva result is one of the “Accepted” outcomes, you will be given some time to make the corrections. Again, consult your university on the exact time limit (different universities set different limits), and stay within it.

Do not pick and choose which corrections to do – all recommended corrections must be done. Read the examiners’ reports for the list of corrections to make, and also check the examiners’ copies of your thesis, because your examiners will also write corrections in the margins of your thesis.

You can see or contact any of your examiners to resolve issues such as contradictory corrections suggested by two or more examiners, or unclear corrections or comments. If you disagree with a correction, you need to give your reasons, but again, as discussed earlier, first ask yourself whether your writing, explanation, or presentation was unclear and led to that misunderstanding.

Failing your viva – but only if you want to!

You might be surprised to learn that failing your viva is actually pretty difficult – unless you intentionally, ill-advisedly, or recklessly set yourself up for it.

Before you even submit your thesis for examination, your supervisors must collectively agree that your thesis and research work can hold up to the intense scrutiny of your viva. In other words, you only head into your viva with your supervisory committee’s blessing. If they feel your research work is not up to the standard of your degree, you will not be given permission to proceed to your viva.

So, do not “arm-twist” or “emotionally blackmail” your supervisory committee into letting you head into your viva despite their objections or reservations about your readiness. Unfortunately, some problematic or stubborn candidates do this: for whatever reason, they adamantly push their supervisory committee to allow them to proceed – and these are the candidates who face a torrid time during their viva, with some even failing or being unable to make the recommended corrections.

But for most postgraduate students, the viva, though nerve-wracking, is often a good, memorable experience because it allows them to share and discuss their work with their examiners in a cordial and professional manner.

It is normal to be nervous going into your viva. But your nerves are good: they keep you alert and ready for the questions coming your way. As you grow in confidence during your viva, you will find yourself relaxing and even enjoying the discussions about your work.

Good luck!

Interview with one viva student

To end my article, here is a brief and simple video interview with one of my Master’s students, Abba Nabayi, who recently went for his viva on April 5, 2016 at Uni. Putra Malaysia.

Abba Nabayi, my Master’s student, recently had his viva at Uni. Putra Malaysia on April 5, 2016. Find out how he did by watching the brief video interview just before and after his viva!

The following video was recorded just before and after Abba’s viva.

 




Should we, religious Malaysians, indoctrinate or teach our children religion or protect them from it?

We as parents want the best for our children. We strive hard to provide the resources and opportunities for our children to discover and build on their strengths. Our hope is that our children become meaningful contributors to society, people who make full use of their lives to become a positive change and influence on others and on society, their country, and even the world.

One skill our children need to master is critical thinking. Few parents would disagree with this. But most parents have an incomplete or erroneous understanding of critical thinking. Critical thinking is not merely about acquiring knowledge but also, in large part, about the process of analyzing the acquired knowledge. This skill involves questioning ideas, even those accepted as the norm. Critical thinking involves breaking a problem down into simpler chunks to be analyzed. It also involves looking at a problem from different perspectives and coming up with good solutions.

Critical thinking is an essential skill our children must master, failing which our children will have difficulty distinguishing facts from nonsense and reality from fiction, and a poor understanding of their surroundings and the world (c) Creativa Images @ fotolia.com.

Critical thinkers are open-minded, mindful of alternatives, curious, well-informed, and good judges of credibility. Such thinkers may be open-minded, but they are also skeptical. They are cautious about immediately accepting the norms, assumptions, and reasons behind a given stance or belief system, and cautious about drawing any conclusions without rational reasoning first.

Critical thinking cannot be reconciled with religious thinking because the latter involves accepting superstitions (such as “magical events”) that violate physical laws and causal relationships.

So, though we as parents say we want our children to think critically, many of us allow religious beliefs to be implanted and inculcated in our children, often with our blessing. In other words, we parents may be skeptical and protective of our children’s minds when it involves unusual claims, ideas, or hype, yet religion often gets a free pass to influence our children. Why is that?

Wendy Thomas Russell, author of “Relax, It’s Just God”, says this is because many of us look to religion for answers to four fundamental questions about life: 1) how did the world come to be, 2) what happens when we die, 3) how should we behave, and 4) why do bad things happen? In other words, many of us turn to religion to answer questions related to purpose in life, morality, and justice.

"Relax, It's Just God" by Wendy T. Russell discusses on what parents should do about teaching religion to their children.

“Relax, It’s Just God” by Wendy T. Russell discusses what parents should do about teaching religion to their children.

And we Malaysians are among the most religious people in the world. About 80% of us are religious, and nearly two-thirds of the religious are fundamentalists: people who are adamant that their religion is the one and only true religion. And while people in the rest of the world generally become less religious as they grow older, we Malaysians remain religiously devout throughout our lives.

We Malaysians are highly religious, among the highest in the world. 80% of us are religious and two-thirds of the religious are fundamentalists. And religion is central to many Malaysian lives. (c) Tan Kian Khoon @ fotolia.com

There is a risk that being brought up in a pious religious environment may affect our children’s critical thinking skills. Scientific studies, in particular by Corriveau and her research team in 2015, have shown that there are differences in the perception of reality between religious and secular children (below 6 years). Secular children were reported to have a keener sense of reality: they understood, for instance, that magical events in any story they read could never happen in reality. In contrast, religious children had more difficulty differentiating reality from fiction. They were more willing to accept that magical events in the stories they read, whether or not these stories had a religious background, could actually happen in real life.

Nonetheless, one important caveat of such studies is that all the children studied were below six years of age. Children’s perceptions of reality, including those of children from religious backgrounds, would likely improve as they develop more complex critical thinking skills, especially with increasing exposure to science education at school.

No doubt even religious parents value critical thinking in their children, but these parents would not hesitate to expose their children, even at a very young age, to religion. And the religion to which these children are exposed is very often only a single religion: their parents’ religion. Rarely do religious parents expose their children to other religions. Religious parents indoctrinate their children, wittingly or unwittingly, into believing that their religion is the only one worth believing. All other religions are often claimed to be false, inferior, or even evil, and thus to be avoided.

Many religious parents expose and teach only one religion to their children, leading to a myopic understanding of other people from other faiths (and those without any) (c) Distinctive Images @ fotolia.com.

Parents who are non-religious often also expose their children to religion. These parents fear that if their children are not taught about religion, then their children risk leading immoral, wasteful, and aimless lives. They also fear that depriving their children of religion could deprive their children of spiritual guidance and comfort in times of trouble.

At the other extreme, secular or atheist parents fear indoctrinating their children with religion, or fear that teaching religion to their children would backfire by making their children religious and prone to superstition at the cost of rational thinking. So some atheist parents completely shield their children from religion, imposing a “religion blackout” with no patience and zero tolerance for any religious ideas.

Should we instead protect our children from superstitious, religious thinking? (c) Thomas Perkins @ fotolia.com

So, what are we as parents to do?

Regardless of whether we are religious, our children should still be taught religion, and not just a single religion but a variety of them. The idea of teaching our children religion is not to convert them to a particular religion but to develop religious literacy, so that they gain a much greater understanding of how religion plays an important role, past and present, in society, the arts, media, music, literature, politics, and architecture.

By having greater religious literacy, our children learn about the differences between groups of people and why people behave as they do. Through greater religious literacy, our children learn tolerance and an appreciation of human differences. Religion may not be important to us or to our children on a personal level, but it is important to some people, so it is important that our children understand this, regardless of whether they believe any of the religious teachings.

Through greater religious literacy, our children would also better understand what drives religious violence, hatred, racism, intolerance, sexism, and terrorism in the world today. If our children are ignorant about religions or are confined to the viewpoint of a single religion, then it is difficult to get them to understand why things happen as they do in society and the world.

So, yes, we should teach religion – not just one but many religions – to our children. But we as parents need to do this on our own, without relying on others or hoping the Malaysian government would suddenly become progressive to allow the teaching of comparative religions at schools. The latter, even if well-intended to promote greater tolerance and harmony among people, would probably be abused by certain people who would indoctrinate our children. Recall that many Malaysians are highly religious, and it is easy to see how a well-meaning policy of teaching multiple religions at schools would lead to abuse, discrimination, and bias by teachers with their religious and personal agendas.

As Wendy Russell says in her book “Relax, It’s Just God”, “Religion isn’t rocket science.” Every parent, regardless of religiosity (or lack of it), is able to impart the generalities of any religion: its beliefs, traditions, practices, and celebrations. Such information can easily be obtained from the web. Children’s books about the religions of the world, free from judgement, are also available, such as those listed in Wendy Russell’s book.

Our children are our precious gifts. But they should not be miniature versions of us; they should blossom into mature and independent individuals, capable of using critical thinking based on reason to decide their belief systems for themselves. Our children should derive their conclusions about their beliefs without coercion, indoctrination, or being force-fed our own belief systems. But to achieve such goals, our children’s critical thinking skills need to be honed. With poor critical thinking, our children will have difficulty separating facts from nonsense, or will be too accepting of all sorts of beliefs, including dubious ones.

Inculcate strong, reason-based critical thinking in our children. Teach them to think, reason, and question everything, even accepted norms. Critical thinking is a priceless gift we can give our children (c) DragonImages @ fotolia.com.

As the late Carl Sagan said, “It pays to keep an open mind, but not so open your brains fall out”.

References

  1. Corriveau, K.H., Chen, E.E., Harris, P.L. 2015. Judgments about fact and fiction by children from religious and nonreligious backgrounds. Cognitive Science, 39: 353-382.
  2. Russell, W.T. 2015. Relax, It’s Just God: How and Why to Talk to Your Kids About Religion When You’re Not Religious. Brown Paper Press, California.
  3. Stern, M.J. 2014. Is religion good for children? The Slate (link).

 




Why we want sex with beautiful people

We can’t help ourselves. Why do we like and favor beautiful people? It’s because we want sex with them. Crude answer, no doubt, but think about it.

What if I told you that the underlying motive for all human behavior, whether in politics, religion, or socioeconomics, is reproductive success; that everything we do, directly or indirectly, whether or not we realize it, is ultimately about passing on our genes to the next generation?

If what evolutionary psychologists are telling us is correct, then all our behavior is ultimately governed by sex and mating. Reproductive success is the purpose of our biological existence, so they say. We live so we can successfully pass on our genes to the next generation. Sure, we may say we work hard to earn that job promotion or higher salary. But underlying our justification, evolutionary psychologists assert, is the creation of a more conducive environment that ensures our genes are more successfully passed on to our children, and to theirs, and so on. In the same way, we may say we ought to choose our life partner with great care: someone to love, someone we can grow old with. Whatever our reasons, ultimately, choosing the right life partner (or partners) ensures our genes are successfully and effectively passed on to the next generation.

And to ensure that good, not inferior, genes have the greater chance of surviving over many successive generations, nature relies on beauty. Good looks are often a sign of good health and fertility, so evolution has conditioned us to prefer certain looks. This is why we immediately recognize beautiful people.

Admit it, we like beautiful things. We like beautiful houses, beautiful gardens, beautiful sceneries, and beautiful cars. We prefer pedigree pets to mongrels. Even movies are somehow better when their main actors are beautiful. And, if the world were our oyster, we would likely have more than one life partner, either simultaneously or serially, and all our partners would be strikingly beautiful. And, yes, we would rather have sex with beautiful people than with plain-looking people, and certainly not with ugly people. Holding everything else constant, we prefer our children to look beautiful too.

Some scientists have established several criteria that define female beauty, such as a waist-to-hip ratio of about 0.7, how far apart the eyes should be (optimally 46% of facial width), and how high the eyes should sit above the mouth (optimally 36% of facial length).
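For readers who want to see the arithmetic, here is a minimal sketch in Python of what such proportion checks amount to. The measurements, the tolerance, and the helper names are hypothetical and purely illustrative; the 0.7, 46%, and 36% figures are simply the values reported above, not a validated beauty metric.

```python
# Toy illustration only: check a set of hypothetical measurements against the
# proportions reported in the studies cited above. Not a validated beauty metric.

def waist_to_hip_ratio(waist_cm, hip_cm):
    return waist_cm / hip_cm

def facial_ratios(eye_separation_cm, face_width_cm, eye_to_mouth_cm, face_length_cm):
    # Eye separation as a fraction of face width, and eye height above the
    # mouth as a fraction of face length.
    return eye_separation_cm / face_width_cm, eye_to_mouth_cm / face_length_cm

def close_to(value, target, tolerance=0.05):
    # The tolerance is an arbitrary illustrative choice.
    return abs(value - target) <= tolerance

# Hypothetical measurements (centimeters).
whr = waist_to_hip_ratio(waist_cm=63.0, hip_cm=90.0)
eye_w, eye_h = facial_ratios(eye_separation_cm=6.0, face_width_cm=13.0,
                             eye_to_mouth_cm=6.5, face_length_cm=18.0)

print(f"waist-to-hip: {whr:.2f}  near 0.70? {close_to(whr, 0.70)}")
print(f"eye separation / face width: {eye_w:.2f}  near 0.46? {close_to(eye_w, 0.46)}")
print(f"eye height / face length: {eye_h:.2f}  near 0.36? {close_to(eye_h, 0.36)}")
```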

But no one needs to whip out a ruler or measuring tape to determine whether someone is beautiful. We immediately recognize beauty when we see it. More than three decades of research have shown that our beauty-detection sensor is innate, built into our DNA. How do we know this?

Studies have shown that even babies as young as one week to three months old will look more intently and for longer at pictures of attractive faces. In one study, twelve-month-old infants played more and were less distressed and less withdrawn when interacting with adults wearing attractive masks than with those wearing unattractive masks. These infants also played more with dolls that had more attractive faces than with those that had less attractive faces.

Not only do we immediately recognize beautiful people, but we are also compelled to want beautiful people around us. Social experiments carried out by ABC’s 20/20, an investigative journalism TV program, for instance, revealed that attractive people are more likely to receive help from strangers (whether of the same or opposite gender) than less attractive people. Even attractive waitresses earn higher tips, as much as 50% more, than their less attractive colleagues. Such trends are not isolated: other experiments, carried out in a more controlled and scientifically rigorous manner, have observed similar patterns, with attractive people often having the upper hand over their less attractive counterparts.

Whether or not we care to admit it, especially in today’s age of political correctness, being physically beautiful can put us at a significant advantage over those who are plain looking. Being good looking, simply put, makes us more sexually attractive, and this in turn promises us great rewards.

In the animal kingdom, peacocks with large, showy tails get the peahens. The larger and showier the tail, the more chances the peacock has to mate and the more offspring it will have over its reproductive lifespan. This is an astonishing phenomenon considering that a large, showy tail carries enormous cost and risk to the peacock. Such an elaborate tail is costly in terms of the resources needed to maintain it, and it endangers the peacock’s life because it is more easily seen by predators. But the reward is enormous: the owner of such a large, showy tail gets to mate and pass on its genes to the next generation. Such animal signaling is very common in the animal kingdom. The mating dance of the Birds-of-Paradise is just one of many other examples, where the more extravagant and choreographed the dance, the greater the chance the male bird will successfully mate.

Rewards of being beautiful: At great cost and risk to its life, the larger and the more showy the peacock tail, the more likely the male bird gets to mate and pass its genes to the next generation. Likewise, our physical beauty makes us sexually attractive with potentially great returns (photo from wired.com).

But animal signaling is not confined to the animal kingdom. We too behave this way: being physically attractive is our version of animal signaling. In the intense competition of the workplace, first impressions do matter. Our face and our body are our ambassadors, instantly recognizable whether we want them to be, because being good looking quickly conveys our potential worth: that we are competent, talented, trustworthy, intelligent, and superior. Our resumes or certificates can only take us so far, says Alison Wolf, British economist and author of “The XX Factor”. Our worth will also include face-to-face evaluations, and being physically attractive, in addition to how we dress, can heavily tip the balance in our favor.

Life, so it appears, is unfair. Daniel Hamermesh, author of “Beauty Pays: Why Attractive People Are More Successful”, remarked that while many people are concerned about discrimination based on race, religion, and gender in the workplace, favoring attractive people over others is a much less known but just as important form of discrimination. Even employers who say they would not discriminate based on people’s appearance will unwittingly go ahead and do so, as studies have revealed. This is because, as stated earlier, we are all hardwired to respond favorably to attractive people. According to Gordon Patzer, author of “LOOKS: Why They Matter More Than You Ever Imagined”, we tend to find attractive people more talented, kind, honest, and intelligent than less attractive people.

The victims, then, are unattractive women and men, who, according to Alison Wolf, tend to suffer just as much as each other in the workplace. Obese women, however, tend to suffer more for their weight than short men do for their height, according to a survey of the US and UK labor markets cited by Wolf in her book.

To further rub salt into the wounds of unattractive people, their attractive counterparts really do tend to be smarter, richer, and more successful.

Research led by Satoshi Kanazawa from the London School of Economics, for example, studied more than 52,000 people in the US and UK over many years and found that attractive men and women scored, respectively, 13.6 and 11.4 points higher in IQ tests than the sample average. Furthermore, a 1994 study by Hamermesh and Biddle observed a positive relationship between attractiveness and labor market earnings across a variety of occupations. Attractive individuals, they found, earned 5% more than those with average looks, while less attractive individuals earned 5 to 10% less.

People tend to see attractive people as being more intelligent, talented, confident, and having more positive beliefs. Consequently, attractive people tend to be more successful (for instance, earning higher salaries) than their less attractive colleagues at work (© leungchopan @ fotolia.com).

Other surveys since then have observed the same trends. A US survey cited by Catherine Hakim in her book “Erotic Capital: The Power of Attraction in the Boardroom and the Bedroom”, for instance, found that good looking lawyers earned 10 to 12% more than less good looking lawyers. Similarly, a survey among MBA graduates found up to a 15% difference in earnings between the most and least attractive people in the group. Even in courts, more attractive defendants tended to receive more lenient sentences (or even escape conviction entirely) or were more likely to win their cases and get larger financial settlements.

Taller men are perceived to be more attractive and have greater strength, energy, and resources. No surprise then that a study by researchers at the University of Florida, University of North Carolina, and University of Pittsburgh found that taller men tended to do significantly better in the labor market than shorter men, after controlling for differences in education, class, race, and general health.

However, the fact that good looking people tend to be smarter, more confident, and more successful than their less attractive counterparts could be the result of a so-called cumulative effect, according to Lisa Walker and Tonya Frevert, two social psychologists from the University of North Carolina. Because attractive people tend to be looked upon favorably by others, they are often given more opportunities and challenges in which to cultivate and demonstrate their talents, knowledge, confidence, and other positive attributes. So, it is perhaps not so much that attractive people are innately better than less attractive people, but that more doors are opened to attractive people. Less attractive people are simply not given as many opportunities as their attractive counterparts to excel.

But what exactly constitutes beauty? What makes a person beautiful? One popular misconception is that the media defines beauty for us based on some arbitrary standards; that girls, for instance, like to be slim and dye their hair blonde because the media has arbitrarily defined slim blonde girls as beautiful. But this is simply untrue. Beauty as portrayed by the media and ads is the consequence rather than the cause of what people find beautiful. Although different cultures have different standards of beauty, there is a great deal of overlap or similarity between these various so-called beauty standards.

A common misconception is that the media and advertisements set or enforce the standard of beauty on us. What we see in the media and ads is instead a consequence of what we ourselves desire in physical beauty (photo from koreanindo.net).

A series of studies in the 1980s and 1990s revealed that regardless of culture, race, geography, and level of exposure to Western media (or the lack thereof), people remarkably agree with one another on whom they find attractive and whom they do not. When photos of Victoria’s Secret lingerie models were shown to men of the Yanomami tribe of the Amazon rainforest, for instance, the men remarked that these models were moko dude (or ‘perfectly ripe’ for mating).

Different cultures have similar views on what constitutes beauty. Yanomami tribal men, when shown photos of Victoria’s Secret lingerie models, remarked that these models were ‘perfectly ripe’ for mating (photos from Ariana Cubillos/AP and Victoria’s Secret).

In 1989, David Buss from the University of Michigan surveyed more than 10,000 people across 37 highly diverse cultures in 33 countries on their mate preferences. Whether the males were from urban, Western societies or from traditional societies such as the Ache of Paraguay or the Shiwiar of Ecuador, they consistently placed a high premium on the physical attractiveness, and in particular the youth, of potential female mates. On average, men all over the world found women most suitable as mates at 25 years of age. Studies by Langlois and associates at the University of Texas in 2000, and in particular those spanning the late 1980s to 1990s carried out by Michael Cunningham from the University of Louisville, consistently showed that people within the same culture or across different cultures agreed with one another about who was attractive and who was not. Work by Cunningham and his colleagues showed that men found female faces with the following characteristics physically very attractive: relatively small chins, large eyes, high cheekbones, and full lips.

Satoshi Kanazawa, a controversial evolutionary psychologist and co-author of “Why Beautiful People Have More Daughters”, went further by distinguishing six characteristics that define the ideal image of female beauty: youth, long hair, a small waist, large breasts, blonde hair, and blue eyes. There is an evolutionary logic, though parts of it are contentious, as Kanazawa admits, to each of these six characteristics. Long hair, a small waist, large breasts, and blonde hair signal youth and good health, and in turn high reproductive value (the expected number of children a woman could have over her reproductive years) and high fertility (the average number of children a woman would have at any given age).

Ideal female beauty I: Blonde and long hair, and blue eyes (© kjekol @ fotolia.com)

 

Ideal female beauty II: Small waist and large breasts. All these characteristics signify youth, good health, and peak fertility (© kjekol @ fotolia.com).

A 2004 study led by Grazyna Jasienska from the Jagiellonian University, Poland, for example, showed that Polish women aged 24 to 37 with small waists and large breasts had greater reproductive potential, as indicated by their higher levels of reproductive hormones, than those with larger waists and smaller breasts. And the light blonde hair of young girls tends to darken, eventually turning brown, as the girls mature into women, so blonde hair is often an indication of a woman’s age. A woman who still retains blonde hair is often still young and at peak fertility. Similarly, long hair indicates good health: older or unhealthy women tend to have shorter and less lustrous hair. Consequently, men find women with long hair, especially lustrous hair, to be highly attractive because such women radiate good health and fertility.

Even without looking at a woman’s face, hands, or body, we can often tell quite accurately whether the woman is young, healthy, and good looking merely by checking whether she has long, lustrous hair, as in this photo (photo from hairfinder.com).

Why we find people with blue eyes most attractive is still open to conjecture, but one possible reason is that eye color is related to how easily we can tell the size of the pupil. Our pupils dilate when we see something interesting or captivating: see an attractive woman and our pupils dilate. Compared with dark colors like dark brown or black, blue is the brightest color for the human iris, which makes it the easiest for us to judge the size of the pupil and thus whether the person is attracted to us. It is perhaps no coincidence, then, that studies in the 1960s and 1970s found that some people describe those with light brown eyes as ‘mysterious’, while those with dark brown eyes are disliked by many, presumably because such a dark eye color makes it difficult for us to ‘read’ the other person’s emotions.

Blue eyes are most attractive perhaps because blue is the lightest color that allows us to ‘read’ people’s interest in us (photo from cnn.com).

Other properties define beauty too, one of which is the bilateral symmetry of a face. People find symmetrical faces more attractive because facial symmetry (where the left side of the face looks the same as the right) indicates good genetic health and fertility. Ill people, people with genetic disruptions, or those born in environments with high exposure to parasites, pathogens, and toxins tend to have less symmetrical faces and are often regarded as less attractive.

Having mixed parentage may also endow us with exotic good looks. Mixed or interracial marriages are an effective way to break down racial barriers and racism, but they come with several added benefits. Children of mixed parentage are often attractive, sometimes much more so than their parents. Such good looks are a consequence of hybrid vigor or heterosis, a concept first put forth by Darwin in 1876: the tendency of a crossbred offspring to have enhanced traits or better genetic quality than both its parents. The Canadian actor Kristin Kreuk, among many others, is one example of someone stunningly beautiful due to mixed parentage. Even the facial features of Americans are expected to change by 2050 due to the increasing popularity of mixed marriages there.

The exotic and stunning good looks of Canadian actor, Kristin Kreuk, the heterosis product of her mixed Dutch and Chinese parents (photo from CBS Portraits).

 

The future ‘average’ face of an American by 2050 as mixed marriages become increasingly frequent in the US (photo from National Geographic).

In 2015, researchers from the University of New South Wales, Australia, used an innovative approach to mimic the evolutionary selection of female beauty. Using computerized images of female bodies and with the help of more than 60,000 online participants, they evolved female body shapes over eight generations, with each successive generation shaped by the ratings participants gave to the previous one. At the end of the experiment, the researchers found that female beauty evolved as expected: the body shapes considered beautiful were those with small waists, long legs, and large breasts. Nonetheless, they also found that it is not any single trait that makes a female shape beautiful but how well the various traits are expressed together. In other words, it is not so much that large breasts make a woman beautiful, but the combination of traits, such as large breasts, a small waist, and long, slender legs, that ultimately makes a woman highly desirable.
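To make that procedure concrete, here is a toy sketch in Python of rating-driven selection in the same spirit: shapes that raters score more highly are more likely to “parent” the next generation, with small random mutations added each time. This is not the researchers’ actual method or code; the traits, the stand-in rating rule, and every parameter value here are hypothetical.

```python
# Toy sketch of rating-driven evolution of body-shape parameters, loosely
# mimicking the kind of procedure described above. Purely illustrative.
import random

TRAITS = ["waist", "legs", "breasts"]   # hypothetical trait scores between 0 and 1

def random_shape():
    return {t: random.random() for t in TRAITS}

def rate(shape):
    # Stand-in for human raters: a smaller waist, longer legs, and larger
    # breasts earn a higher rating, with some noise; clamped to stay positive.
    score = (1 - shape["waist"]) + shape["legs"] + shape["breasts"]
    return max(0.01, score + random.gauss(0, 0.1))

def next_generation(population, ratings, mutation=0.05):
    # Parents are drawn with probability proportional to their ratings, and
    # each trait is copied with a small random mutation, clamped to [0, 1].
    children = []
    for _ in population:
        parent = random.choices(population, weights=ratings, k=1)[0]
        child = {t: min(1.0, max(0.0, v + random.gauss(0, mutation)))
                 for t, v in parent.items()}
        children.append(child)
    return children

population = [random_shape() for _ in range(200)]
for _ in range(8):                       # eight generations, as in the study
    ratings = [rate(s) for s in population]
    population = next_generation(population, ratings)

means = {t: sum(s[t] for s in population) / len(population) for t in TRAITS}
print(means)   # waist drifts low; legs and breasts drift high
```

Over a few generations, the simulated population drifts toward whatever the raters reward, which is the essence of the selection procedure described above.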

Being beautiful, however, has its downsides. While attractive men may be considered better leaders, sexist prejudice can work against attractive women, as noted by Lisa Walker and Tonya Frevert, social psychologists from the University of North Carolina. So, while attractive women can have the upper hand over their less attractive counterparts at certain job levels, they can be discriminated against when seeking high-level jobs that require authority and strong leadership. Moreover, jealousy toward attractive people in the workplace can cause discrimination or loss of opportunities.

Justin Trudeau, the recently elected Prime Minister of Canada, is hugely popular not only for his liberal and progressive ideas but also for his youth and good looks. But had Justin Trudeau been an attractive woman instead, the outcome of ‘her’ election could have been quite the opposite. Sexist prejudice can keep attractive women from holding high-level posts that require authority and strong leadership.

But our perception of beauty is changing. Social norms of what should be considered beautiful now place more emphasis on social fairness, sensitivity, and realism.

A 20% fall in Barbie doll sales between 2012 and 2014 saw Mattel, the maker of Barbie, give its once highly popular doll a makeover. In 2015, Mattel introduced 23 new Barbie dolls with eight skin tones, 14 facial structures, 22 hairstyles, and 18 eye colors. And on Jan. 28, 2016, Mattel further introduced three new body types of Barbie: curvy, tall, and petite. The curvy doll in particular has noticeably fatter thighs and a protruding tummy and behind.

Slumping Barbie sales meant that in 2015 and 2016 Mattel had to revamp its popular Barbie doll, introducing dolls with different skin and eye colors and physical attributes, notably including curvy, petite, and tall versions of Barbie. The curvy Barbie, in particular, has fatter thighs and a protruding tummy and behind (photo from Mattel).

Madeline Stuart, an 18-year-old fashion model, is yet another example of the change in people’s perception of beauty. What makes Madeline’s story inspiring and highly unusual is that she has Down syndrome.

Madeline Stuart, the 18-year-old doing what was previously unthinkable: being a fashion and runway model with Down syndrome. She is featured in the 2016 New York Fashion Week (photo from Madeline Stuart’s Facebook).

Lane Bryant, a popular US women’s clothing retailer, has recently been aggressively promoting a campaign celebrating women of all shapes and sizes. Print and video ads by Lane Bryant featured black-and-white pictures of six models, all of them plus-size.

Changing the perception of beauty: Lane Bryant’s celebration of women in all shapes and sizes, featuring all plus-size models (photo from Lane Bryant).

And what makes the 2016 issue of the Sports Illustrated (SI) Swimsuit edition different from previous years is the appearance of 28-year-old Ashley Graham, a plus-size model.

Plus-size model, Ashley Graham, will feature in the 2016 issue of Sports Illustrated Swimsuit edition (photo from SI).

So, blonde women who are slim and tall and who have a small waist, large breasts, blue eyes, high cheekbones, a petite nose, full lips, and a small chin may be considered highly desirable by men. But such beauty standards are increasingly seen today as too idealistic; most women would not be endowed with all the features required for such beauty perfection. Society is instead beginning to accept imperfection because imperfection is more real and fair. Perhaps one day, then, meritocracy based purely on people’s intelligence, talent, and experience will no longer be skewed by physical and social attractiveness.

What is beauty then? It comprises outer and inner beauty. Focusing too much on our appearance can itself be detrimental, even if we are considered attractive, because it creates stress and anxiety. While our outer beauty fades with time, our inner beauty, in contrast, develops, and as it matures over time, we become an increasingly wonderful human being.

References




Do robotics activities help our children learn better?

I recently enrolled my son, Zachary, in a robotics school called Little Botz Academy. This school, which has a partnership with Universiti Malaya, teaches robotics to children mainly between the ages of 8 and 12 using Lego Mindstorms EV3 and Rero. Also included in their curricula are computer programming and practical computer skills.

My son, Zachary, just recently started his robotics classes at Little Botz Academy. His classes are twice a week for six months.

Like most boys, Zachary loves Lego and robots. I too had my fair share of Lego and robots whilst growing up, but back then, Lego was not as popular or as widely available as it is now. Today, there are Lego movies, Lego TV series, and, blimey, even Lego theme parks. Robots today have changed too: no longer the docile, passive machines of the past, they are now flexible, programmable, and reactive. Combine the two, Lego and robots, and what we have is an integration of two very popular playthings for children. But are Lego and robots, in the end, just that: toys? Sure, they are addictive and fun to play with, as evidenced recently when one of Zachary’s friends visited our home and the two of them played for five hours straight building Lego pieces into robots, stopping only for toilet breaks and a coerced lunch. But ultimately, what do Lego and robots actually teach our children?

Only some of the many bots of the highly configurable and programmable Lego Mindstorms (photo from linuxgizmos.com).

 

Another popular programmable robot is the Rero (Reconfigurable Robot) (photo from rero.io).

No doubt many of us intuitively assume that robotics activities will motivate and fortify our children’s learning. But if there is one thing I have learned from science, it is this: our intuitions, though seemingly common sense, are not always correct.

In other words, I was looking for empirical evidence, not anecdotes, subjective experiences, or sales pitches from robotics school brochures, on how effectively robotics classes would help my son’s general learning experience.

The reality, at the end of my research, is simply this: there is still insufficient evidence on the actual impact of robotics on enhancing our children’s learning experience. But before you conclude that robotics classes are a waste of our precious money, be aware that having robotics activities in classrooms is a rather recent novelty, so we should expect a still unsatisfying number of studies on their effectiveness. A more serious problem, however, is how these past studies have been conducted.

Most past studies that did evaluate the use of robotics activities in school classrooms are unfortunately descriptive in nature, relying solely on teachers’ and children’s subjective reports of their learning experience. A 2012 review by Benitti from the Universidade do Vale do Itajai, Brazil, for instance, found that during the ten-year period from 2000 to 2009, only ten studies used empirical analysis to measure the impact of robotics as a teaching aid in school classrooms.

Moreover, robotics in the past has mostly been used in a limited manner, typically to teach topics directly related to robotics. Benitti remarked that robotics need not always be about robotics per se; it can be made general enough, without being tied down to any academic area or scope, to accommodate children’s interests, whatever those may be. Children interested in cars, for instance, could apply what they have learned from robotics to create motorized vehicles, and children interested in music or the arts could create interactive sculptures.

Even though limited in number, the ten studies found by Benitti are nonetheless comprehensive in scope, covering a total of over 1,200 school students aged 6 to 15 years from various countries. More importantly, to me at least, these studies were specifically designed to determine the effectiveness of robotics activities not in teaching robotics per se but in enhancing children’s learning of STEM (Science, Technology, Engineering, and Mathematics) topics.

The outcomes from these ten studies are promising. They generally report that students in classes with robotics activities scored higher in exams related to maths, computer programming, robotics, engineering, and physics than those in the control group (classes without any robotics activities). Also encouraging is that robotics activities made students more intellectually stimulated and engaged with the topics being taught. The students in one robotics-aided class, for example, showed a greater understanding and appreciation of evolution topics and were more engaged in classroom discussions with their peers than those in the control group. One study found tentative evidence that the use of Lego had helped one group of students, those who performed averagely in mathematics, to improve their maths scores a year later.

Nonetheless, merely having robotics activities in the classroom is no guarantee that they will succeed in enhancing learning. There have been reports where no improvement in learning was observed. Even after a year of Lego robotics training, for instance, about 200 students in several schools across Sweden performed overall no better in mathematics and problem solving than those who did not receive any Lego training.

Consequently, the effectiveness of robotics activities in enhancing learning depends on several factors, some of which, as asserted by Lindh and Holgersson from the Jönköping International Business School, Sweden, are: 1) children must be given enough space in the room to work with their robots, 2) no more than two or three students should be assigned to a single group working on a single robot or activity, and 3) the robotic tasks given to the students must be specific, realistic, and related to the topics currently taught at school. But the most important criterion of effective robotics training is ultimately the teacher, who must not only be knowledgeable in robotics, but also have a positive attitude and be motivated to steer the children’s learning process.

Scientific evidence about the effectiveness of robotics activities may still be lacking or not entirely convincing. But as with any other scientific enquiry, I am sure the effectiveness of robotics training will become increasingly clear over time as evidence mounts. Without doubt, robotics classes are becoming increasingly popular today, especially among children, and scientists will want to establish their efficacy.

So, in the end, it is important to have realistic expectations about the effectiveness of robotics classes. Yes, such classes can be effective, but much also depends on the school itself: its robotics curriculum, how it carries out its classes, and the kind of learning environment it creates. Little Botz Academy, my son’s robotics school, does appear to have the right ingredients, as I have listed earlier, but I am not sending my son there because I have become totally convinced of the effectiveness of robotics activities. No, I am sending Zachary there because I see that he enjoys playing with Lego and robots, and I am sure some meaningful learning outcome will emerge as he designs, builds, and programs his robots. It is also important to allow Zachary to discover whether his fascination with and enjoyment of Lego and robots can go beyond mere toys to something more meaningful and life-changing.

But most of all, I want my son to learn robotics because I do not want him to grow up thinking that learning becomes meaningful only in the absence of fun.

Zachary having a go with his Lego Mindstorms set during class.

 

References

  1. Benitti F.B.V. (2012) Exploring the educational potential of robotics in schools: a systematic review. Computers & Education, 58, 978-988.
  2. Lindh J., Holgersson T. (2007) Does lego training stimulate pupils’ ability to solve logical problems? Computers & Education, 49, 1097-1111.



Burden of our false races: Defeating racism and the myth of race in Malaysia

We Malaysians are defined by our races. Racial thinking is deeply entrenched and ubiquitous. It pervades our society such that our race determines our opportunities and experiences in education, work, religion, culture, friendship, romance, and politics. Our race affects how we interact and how we view others. Even whom we support in politics is determined by our race.

Our lives, however, are governed by a myth. My race, like yours, is false.

As long as seven decades ago, in 1942, Ashley Montagu wrote in his book Man’s Most Dangerous Myth: The Fallacy of Race that human races did not exist. Human races had no evolutionary basis, and they could not explain differences among human populations. People who still believed in races, Montagu wrote, were outdated in their thinking. And in 1950, based on the findings of an international panel of anthropologists, geneticists, sociologists, and psychologists, UNESCO issued a statement that all humans belong to the same species and that race is not a biological reality but a myth.

Since then, increasing scientific evidence has continued to fortify the notion that race is nothing but a myth. Science has shown, for instance, that no biological relationship exists between race and intelligence, talent, law-abidingness, or economic performance, just as there is no biological relationship between race and skin color, eye color, blood group, height, skull shape, facial features, hair color, or hair texture.

Human races are not an evolutionary outcome of nature but a human invention. Race is a weapon, a powerful ideological tool, used to divide, subdue, and control people. Race is a way to institutionalize human diversity by placing people into racial categories and using these categories to shape public policies. Ironically, preventing debates on racial issues in Malaysia, presumably to maintain social order and harmony, has helped further entrench people in their races. Race loyalty, advocacy, and activism breed further polarization, intolerance, discrimination, and inequality.

Race is both a social construct and a social contract. Not only have we allowed ourselves to be divided into race groups, we have also allowed our lives to be structured and controlled according to our race. That we have allowed all this to happen to us is not the worst of it; the worst is that we Malaysians are disturbingly zealous about our race and adopt a “siege mentality” to preserve and defend it. We, once the creators of our races, now acquiesce to our creation.

Race is a modern invention: it did not appear until the mid-17th century. Before then, it was ethnocentrism, not race, that separated people. Some people may have believed themselves to be culturally superior to others in terms of language, diet, adornment, conduct, and religion. That slaves might have a different “race” or skin color from their owners hardly mattered; it was differences in religious affiliation that were the main justification for the subjugation of slaves. Not even in the 1600s did the early English colonists view their “black” slaves in racial terms.

But as European exploration and colonization became increasingly widespread and established, racial thinking became prevalent. Race was invented to justify slavery, to preempt slave revolts, and to control and oppress certain groups of people. Race created hierarchies among peoples, and race became an effective tool to foster whites’ contempt for blacks and people of other skin colors. The 18th century was the great age of scientific classification of biodiversity. Unfortunately, it was also the period that saw misguided attempts, such as by Carl Linnaeus, the well-known Swedish naturalist, to classify humans into races. The American and European races, wrote Linnaeus in the 10th edition of his “Systema Naturae”, were merry, free, gentle, acute, inventive, and principled, whereas the Asian and African races, in contrast, were haughty, crafty, indolent, opinionated, and impulsive.

That races do not exist seems counterintuitive. Take an Indian, a Malay, and a Chinese, for example. Few of us would have difficulty telling them apart. One distinguishing characteristic is their skin color: Indians generally have the darkest skin tone, followed by the Malays, with the Chinese having the lightest tone. But skin colors do not change abruptly. They are graded, varying gradually along a color gradient. Travel by land, for instance, from Nairobi, Kenya to Oslo, Norway, and you will notice that people’s skin color varies gradually from black to brown and finally to white, with no point along the gradient that separates any neighboring colors. So, if we use skin color as the basis of racial classification, what cut-off points on the color gradient do we use to classify people as having white, brown, or black skin?

Skin color is one distinguishing feature that sets apart Indians, Malays, and Chinese, but skin color, like any other human trait, is an unreliable criterion for classifying human variation (photo from Choo Choy May, themalaymailonline.com.my).

Not just skin color, but hair color and the distribution of blood types, in particular the group B type, vary gradually as well. From western to eastern Europe, or from southeast and northeast Asia toward central Asia, progressively more people carry the B blood group gene. And in Australia, moving inland and away from the coast, the number of supposedly single-raced Aborigines with yellow-brown (blonde) hair increases while the number with black hair decreases.

Dark-skinned Melanesians and Aboriginal Australians do not all have black hair. They can have traits found in “Caucasians”: red and blonde hair, for instance … (from telegraph.co.uk)

… and for some, blue eyes (from afritorial.com).

Human races do not exist because human diversity cannot be compartmentalized into mutually exclusive groups. Individuals can simultaneously belong to two or more races, whichever set of classification criteria we use or develop, because race groups always overlap: a person in one race group will likely have characteristics or traits found in other race groups. For instance, blonde hair is often associated with light skin, but this is not always true: five to ten percent of dark-skinned Melanesians (as well as Aboriginal Australians) have blonde hair. Even among the dark-skinned populations of India, Sri Lanka, and Central Africa, people vary widely in other traits, so they could be assigned to different races even though their skin tones are similar. This shows that, depending on how we define race, a person’s race can change, and any given human population can have as few as one race or as many as tens, if not hundreds, of races.

Human diversity is multifaceted and involves overlapping traits from other groups of people (from “Are We So Different” art exhibit: understandingrace.org).

Racial classifications will always fail because races are difficult to define, and there are no impartial and consistent rules for deciding what constitutes a race or to what race a person belongs.

Make no mistake: human variations are real. They are just not caused by race. Instead, human variations are caused by evolution and natural selection. Modern humans evolved in Africa about 150,000 to 200,000 years ago, and some of them started to migrate out of Africa about 50,000 to 100,000 years ago. As different groups of people travelled into different parts of the world, each group picked up new genetic mutations that were not present in the original population from which they came or in groups that took different migration routes. It is tempting to believe that these genetic differences between human populations would be large enough for us to use them for racial classification. This is not the case.

Human variations are real, diverse, and wonderful. Classification by “race” is worthless simply because it cannot categorize our full range of diversity (from smithsonianchannel.com).

In 1972, the geneticist Richard Lewontin, in a landmark study, showed that most human genetic variation is found not between human populations but within the same population. Subsequent studies, in particular by Rosenberg and his colleagues in 2002, have confirmed this to be true. The astonishing fact is simply this: “race” accounts for only 5% or less of all human genetic variation. Nearly all of it (93 to 95%) lies between individuals of the same population. To put it another way, there are far more genetic differences between individuals of the same “race” than between individuals of different “races”.
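To make the idea of “apportioning” variation concrete, here is a minimal sketch in Python for a single gene with two variants, using made-up allele frequencies rather than Lewontin’s data. When populations have similar allele frequencies, as human populations generally do, almost all of the diversity sits within populations rather than between them.

```python
# Minimal sketch of partitioning genetic diversity within vs. between populations
# for one biallelic gene. The allele frequencies below are invented for
# illustration; they are not Lewontin's data.

def apportion(freqs):
    """freqs: frequency of one allele in each population (equal population sizes assumed)."""
    p_bar = sum(freqs) / len(freqs)                                # pooled allele frequency
    h_total = 2 * p_bar * (1 - p_bar)                              # total expected diversity
    h_within = sum(2 * p * (1 - p) for p in freqs) / len(freqs)    # mean within-population diversity
    h_between = h_total - h_within                                 # what is left between populations
    return h_within / h_total, h_between / h_total

# Hypothetical allele frequencies in three populations.
within, between = apportion([0.55, 0.60, 0.70])
print(f"within populations: {within:.0%}, between populations: {between:.0%}")
# With frequencies this similar, the within-population share dominates (here ~98%),
# the same qualitative pattern Lewontin reported across many real genes.
```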

Why is this so? Human variation is largest where humans have lived the longest. Modern humans arose in Africa, and it is there that humans have lived the longest, which means they have had more time there than anywhere else to accumulate genetic changes. When humans migrated out of Africa, they took with them only some (not all) of these genetic variations, because only some individuals from Africa migrated. Consequently, the genetic variations these travelers carried were a subset of those found among the people who stayed in Africa. And this is exactly what Yu and his colleagues discovered in their 2002 study: they found more genetic variation between two Africans than between an African and a non-African. Although those who migrated out of Africa accumulated new mutations not found in the original African populations, these mutations occurred in only a small set of genes, those that needed to function differently in the migrants’ new environments.

Lastly, human races do not exist because human migration out of Africa was too recent in history and the various human populations, despite being scattered over the world, were not isolated enough from one another for racial differentiation.

…there are far more genetic differences between individuals of the same “race” than between individuals of different “races”

Science has shown that we are all related, that we are all mongrels, not purebreds, with intertwined and primal ancestry, and that we are all essentially Africans under our skins. Race as a concept has been out of date for more than seventy years, so why is it that very few Malaysians even today are aware of this fact? Why haven’t we been made aware, and why haven’t our children been taught in school, that race is purely a myth?

Defeating racism begins with understanding our human origins and why people differ from one another on a genetic, not racial, basis. But this is only half the struggle. Prof. Mark Cohen, an anthropologist from the State University of New York and author of Culture of Intolerance: Chauvinism, Class, and Racism, argued that we should make it mandatory for our children to be taught cultural relativism: the comparative study of human cultures. Merely learning that race is a myth because it does not explain human diversity is insufficient. We also need to learn that it is culture that distinguishes one group of people from another. When people refer to “race”, they often actually mean “culture”.

As Prof. Cohen explained, “The key point is that what we see as ‘racial’ differences in behavior may reflect the fact that people have different values, make different choices, operate with different cultural ‘grammars’, and categorize things (and therefore think) in different ways.”

So it is cultures, not races, that we should examine. We need to look at what people are doing and try to understand their behavior in the context of their culture and situation. We need to understand that although we may not share the desires or perceptions of people from other cultures, our own desires and perceptions can appear just as arbitrary, unusual, or different to them. Only when we understand and appreciate the diversity of cultures, and understand people’s behavior and actions in the context of their cultures, can we realize that there is often more than one pattern for human perceptions, desires, and points of view. This in turn fosters tolerance, freer thinking, and less fundamentalist attitudes.

Like race, culture is a human invention too. If we fail to realize this, we risk substituting culture for race and ethnocentrism for racism, and we risk having our lives compartmentalized and constrained by the arbitrary rules of our culture instead. According to Kenan Malik, author of Strange Fruit: Why Both Sides are Wrong in the Race Debate, people should not be subjugated by their cultures, with their identities and behaviors chained to their culture. It is time to realize that people are free agents: rational and social beings who have the power to transform themselves and their societies, through rational dialogue and action, for the better and the overall good. We create and shape our culture, so why do we behave as if we are subjugated by it?

“Strange Fruit: Why Both Sides are Wrong in the Race Debate” by Kenan Malik

In multicultural societies, Malik argued, people are not seeking to maintain cultural differences or even equality; they are instead seeking equal political opportunities. Many multicultural societies are failing today because of cultural attachments: people are tied to their cultural groups and treated accordingly.

“A truly plural society,” Malik explained, “would be one in which citizens have full freedom to pursue their different values or practices in private, while in the public sphere all citizens would be treated as political equals whatever the differences in their private lives.”

So, yes, Malaysians’ diversity should be celebrated, but we should not have our diversity chain us into predisposed identities, behaviors, and reasoning, or have our diversity segregate us into immutable and intolerant groups, or have some of us receiving unfair opportunities.

Despite our diversity, we Malaysians are equal to one another, and until we realize and value this, “racial” and cultural prejudice will continue unabated in our country.

Some get it, but most of us Malaysians are disturbingly zealous about our non-existent race (from FB group: “1 Million Likes to Say No to Racism in Malaysia”).

References

  1. Cohen, M.N. 1998. Culture, not race, explains human diversity. The Chronicle of Higher Education, April 17, 1998. XLIV(32): B4-B5.
  2. Goodman, A.H., Moses, Y.T. and Jones, J.L. 2012. Race: Are We So Different? John Wiley & Sons, West Sussex, UK.
  3. Lewontin, R. 1972. The apportionment of human diversity. Evolutionary Biology, 6: 381-398.
  4. Malik, K. 2012. Why both sides are wrong in the race debate. Pandaemonium blog, March 4, 2012.
  5. Malik, K. 2012. What is wrong with multiculturalism? Parts 1 and 2. Pandaemonium blog, June 4 and 7, 2012.
  6. Rosenberg, N.A., Pritchard, J.K., Weber, J.L., Cann, H.M., Kidd, K.K., Zhivotovsky, L.A. and Feldman, M.W. 2002. Genetic structure of human populations. Science, 298: 2381-2385.
  7. Sussman, R.W. 2014. The Myth of Race: The Troubling Persistence of an Unscientific Idea. Harvard University Press, Cambridge.
  8. Yu, N., Chen, F.C., Ota, S., Jorde, L.B., Pamilo, P., Patthy, L., Ramsay, M., Jenkins, T., Shyue, S.K. and Li, W.H. 2002. Larger genetic differences within Africans than between Africans and Eurasians. Genetics, 161: 269-274.



Malaysian social media regulation? Welcome to the dark side of social media

We think that social media such as Facebook are a boon for uncovering the truth. We believe this because information flows unrestricted and uncensored through an open, diverse, and hyper-connected network of friends, friends of friends, and freedom fighters. There are no gatekeepers here. No one decides which information goes forward and which does not. Information reaches you quickly, unbridled by the censorship and manipulation of an authoritarian and paranoid regime. But think about it: why should unbridled social media disseminate only the good and the truth?

The freedom we enjoy on social media is the same freedom bestowed on those driven by personal and political agendas to spread misinformation and propaganda. The information that goes viral is the kind that provokes anger and shock, so what better way to create viral messages than to spread hate?

Consider the recent Lowyat incident that began as a run-of-the-mill mobile phone theft but soon mutated into a racial fight, fanned by the spread of misinformation on social media and blogs. Or consider the “social media experiment” by CAGM (Citizens for Accountable Governance Malaysia), which deliberately spread misinformation about our Prime Minister to bring home the point that our media thrive on reporting sensational news.

Dear naïve Malaysians, welcome to the dark side of social media.

Social media, in particular Facebook, help to polarize people and encourage herd mentality. We are naive to think that using Facebook will promote national unity. Facebook actually accentuates the differences between groups of people. Facebook’s collaborative filtering helps us find like-minded people: those who share our beliefs, ideas, and perspectives. When everyone in our circle of friends thinks alike, is there room for a greater understanding of opinions and perspectives different from ours? Facebook’s news feed algorithm further selects news that matches our interests and beliefs. To have open and effective discussion and learning, we need a diversity of opinions and points of view. Instead, Facebook encourages herd mentality. Facebook validates and entrenches our existing stances and opinions.
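
To see, in the abstract, how such a “filter bubble” can arise, consider the toy sketch below (written in Python). It is emphatically not Facebook’s actual ranking system; the topics, posts, and scoring rule are invented purely for illustration. It simply ranks posts by how closely their topics match what a user has already liked, and that alone is enough to push dissenting content out of sight.

    # Toy "filter bubble": rank posts by similarity to what a user has already
    # liked. NOT Facebook's algorithm; topics, posts, and the scoring rule are
    # invented purely for illustration.
    from collections import Counter

    def rank_posts(user_likes, posts):
        """Score each post by how often the user has liked its topics before."""
        liked_topics = Counter(t for post in user_likes for t in post["topics"])
        return sorted(posts,
                      key=lambda p: sum(liked_topics[t] for t in p["topics"]),
                      reverse=True)

    # A user who has only ever liked posts from one political camp...
    user_likes = [{"topics": ["party_A", "rally"]},
                  {"topics": ["party_A", "policy"]}]

    # ...sees party_A content first; the opposing view sinks to the bottom.
    feed = rank_posts(user_likes, [
        {"title": "Party A wins debate", "topics": ["party_A", "policy"]},
        {"title": "Party B responds", "topics": ["party_B", "policy"]},
        {"title": "Cute cat video", "topics": ["cats"]},
    ])
    for post in feed:
        print(post["title"])

Real news-feed ranking is, of course, far more elaborate, but the self-reinforcing loop is the same: the more one-sided our past clicks, the more one-sided the feed we are served next.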

Does Facebook encourage group polarization? The social media “filter bubble” technology filters and chooses news and information that matches our interests and opinions. We further choose to read articles that match our opinions and views, so in the end, our views are not challenged but entrenched (from Bakshy et al., 2015).

Facebook is our “echo chamber” where we only hear, see, and click on what we want to hear, see, and click. On social media, we insulate ourselves from news and views that are different from ours.

Furthermore, a 2014 survey by the Pew Research Center found that social media stifle debate. The survey found that social media users were less willing to discuss the Snowden-NSA controversy on social media than they were in person. While 86% of Americans reported that they were willing to discuss the issue in person, only 42% of Facebook or Twitter users were willing to do so on those platforms. Moreover, the survey found that people were more willing to share their views on social media only if they thought their audience agreed with them. Likewise, a 2013 study by Carnegie Mellon University found that people tend to self-censor their social media posts and comments more when the topic of discussion is highly specific or when the audience is ill-defined. People self-censor out of fear of offending others, instigating an argument, disagreeing, or being criticized. Facebook can even promote racism, as reported in a 2013 study by two US psychologists, who observed that prolific Facebook users were more susceptible than casual users to negative racial postings on Facebook.

Prof. Susan Greenfield, who is also a member of the British upper house, is a vocal critic of social media. Social media, she explains, promote narcissism and reduce empathy and self-identity, especially among the young. Words make up only about 10% of the social cues in communication, so connecting with others via social media deprives people of the other vital cues. Consequently, Prof. Greenfield explains, social media make it easier to insult others without noticing the repercussions the insults have on their victims. A 2014 study in the US revealed that when preteens were barred from all screen-based media and communication tools for just five days, their interpersonal skills improved.

Prof. Susan Greenfield is a vocal critic of social media which she explains encourage narcissism and reduce empathy and self-identity. The social media also make it easier to insult others (from telegraph.co.uk).

Social media have a very dark side. Far from being some utopian tool of truth, democracy, and social justice, social media can also be a tool of misinformation and hidden agendas, a playground for malicious attempts to steer people into believing and behaving in a given way. Social media can stifle, not promote, debate, and they amplify the differences between groups of people. Social media discourage tolerance and understanding of people whose beliefs and opinions differ from ours.

Racism is rife in Malaysia, including in public universities. We are naive to think social media will reduce racism. It might instead promote it (from www.lukeyishandsome.com)


The good, meaningful life without God and religion: Malaysian atheists speak out

At the extreme end of the religiosity scale, and obstinate against the rising tide of religiosity in the country, are a small number of Malaysians—no more than 1% of the country’s population—who are atheists. Freethinkers, agnostics, and nontheists, as they are sometimes known, are merely different shades of the same stance: a disbelief in any God or religion, or at least a conviction that God and religion are unimportant, if not irrelevant, in their lives.

Some think it unnatural and disconcerting, perhaps even suicidal, for anyone to willfully forsake all religions. How can anyone, without religion, decide what is wrong and right, for instance? How can anyone be good or have a meaningful life without divine help?

Who are they, these unbelievers?

Atheism is no longer fringe but growing. In 2012, thirteen percent of the world’s population identified as atheists, an increase of 4% since 2005, and, over the same period, world religiosity declined by 9%. But whether religiosity rises or falls depends on where you are. Vietnam, Ireland, Switzerland, France, South Africa, Iceland, Ecuador, the US, and Canada are among the countries that have witnessed the largest declines in religiosity, of between 10 and 20%.

Malaysia, in contrast, saw the proportion of religious people rise from 77% to 81% of the population and the proportion of atheists fall by 4% between 2005 and 2012. Whereas religiosity elsewhere tends to decline with age, Malaysians’ religiosity remains unwaveringly sky-high across all age groups, from 15 to 54 years. Furthermore, nearly two-thirds of religious Malaysians are fundamentalist—adamant that their religion is the one true religion and the only truth.

If forsaking religion were bad, there should be evidence that secular societies tend to fail or be worse off than religious ones. Yet scientific studies consistently show the opposite: people in secular countries, compared with those in religious ones, are more involved in charity work; are more trusting of strangers; have higher IQ scores; have lower levels of prejudice, ethnocentrism, racism, and homophobia; show greater support for women’s equality; are more appreciative of science; and report higher levels of subjective well-being. Secular countries also show higher economic growth, greater democratic stability, and better governance than religious countries.

Such trends persist even in Malaysia. The World Values Survey Wave 6 (2010-14) showed that, among Malaysians, religious people were more intolerant of other races and religions than atheists were. For instance, a third of religious Malaysians indicated they would not want neighbors of a different race or religion, compared with only 9% of Malaysian atheists. Furthermore, the atheists were between 10 and 30% more supportive of women’s equality in marriage, education, work, and politics, and as much as 38% more appreciative of science, than the religious.

Science in the religious Arab world has regressed since the 13th century. No major invention or discovery has emerged from the Muslim world in the past eight centuries. It is frightening to learn that people in the UAE read on average only one book per decade, and that Spain translates more English books into Spanish in one year than the whole Arab world has translated into Arabic in the last 1,000 years. Whereas the world spent an average of 2.2% of GDP on science in 2010, the Arab countries spent only 0.1 to 1.0%. The Arab world contributes only 1.4% of the world’s scientific papers and 0.1% of international patents. Furthermore, the entire Arab region can boast of only two Nobel laureates in the sciences, compared with more than 120 Jewish laureates. The OIC countries have only 8.5 scientists, engineers, and technicians per 1,000 population, compared with the world average of 40.7 and the OECD average of 139.3.

Arab astronomers. Since its glory days, there has been no significant Muslim invention or discovery for the past eight centuries (image from utopiaordystopia.com).

Correlation is not causation, of course. But societies appear to thrive, not collapse as some would predict, when religion is absent or exerts little influence.

But, for some, being an atheist in Malaysia is difficult, if not dangerous. For ex-Muslims, coming out of the closet as an atheist is always an unsafe option, for severe discrimination and prosecution await them. Malaysia is among the most religious countries in the world and the least tolerant of unbelievers, as revealed by a 2012 report by the International Humanist and Ethical Union. Even our Prime Minister called humanism, secularism, and liberalism “deviant” and a “threat to Islam and the state”.

Amir (not his real name), 24 and a recent university graduate, is both a Malay and an atheist. Having studied in many countries (both secular and Islamic) for nearly all his life, Amir has been exposed to a far greater diversity of cultures and outlooks than most Malaysians have.

Amir described to me one of his early struggles with his faith: “[Imagine] you are in an international school and you are the only Muslim in the class. You look at everybody, and you think how could all of them be going to hell just because they don’t believe in the same things that I might have believed in. They are all going to hell even if they are not bad people … That was one of the first times I thought about atheism.

“When you realize that there are a lot of different ways of living, you find that maybe [what] you have been taught isn’t necessarily the right one.”

Amir’s mindset is just too different from that of other Malays, so it is not surprising to learn that he has no Malay friends. Even the few he once had eventually distanced themselves from him.

“When I did tell them that I was an atheist—that sort of screwed things up,” Amir quipped. “It’s like there’s something wrong with [me]. [This happens] even with someone I thought I was getting along with previously. Some unspoken barrier comes up.

“I find even the religious moderates in this country, by my standards, to be quite religious.”

Each atheist has a different story to tell. Not all are like Amir, of course, who understandably has to keep his atheism a secret from his religious parents and from society. Apart from Amir, none of the other atheists I met had experienced any appreciable prejudice or discrimination because of their atheism.

Two other atheists I met were Willie, 34 and a lecturer at a local university, and Kok Sen Wai, 29 and a medical officer. Both are open and outspoken about their atheism. Willie, in particular, has given many talks on rational thinking and humanism, both within and outside the country.

Willie, age 34 and a lecturer at a local university.

Both Willie and Sen Wai share a similar past. Both were once pious, Willie as a Christian and Sen Wai as a Buddhist, and both began their slide toward atheism by asking too many questions: first of their own religion, then of other religions.

“I started by comparing the different sects of Christianity: Anglican, Catholic, Methodist, and so on,” Willie recalled. “When I was going through all of them, I realized that there are lots of different interpretations of the holy texts. Then I started checking out other religions as well. I actually read a translation of the Quran, and I looked up Buddhism and Hinduism. After a while, I figured out that there doesn’t seem to be a correct way, like the perfect way, of interpreting all of them.

“There is no proof. When you are faced with a question of whether something exists or not, you would actually require proof of it before you start believing in it.

“In the beginning, I considered myself an agnostic … but, really, I discovered I was basically an atheist by definition.”

Sen Wai’s story is similar: “I guess this was the point in my life [after examining the different religions] when I realized that acquisition of [further] knowledge is fruitless if I am unable to tell if what I have learned is true or false. Though I did not know it at that time, I had inadvertently become a skeptic.

“The more you learn about religions, the more you realize that they are all alike in some way or another. They all demand faith that exceeds reason. Some of them even teach objectionable lessons that offend my conscience… I had stopped searching. I had come to accept that the well of religion is dry. I had become godless.”

While religious issues frequently occupy Willie’s and Sen Wai’s thoughts and concerns, Joey, who is 21 and a local university student, is rather indifferent. He has never been religious, so sliding into atheism for him was rather effortless, perhaps even inevitable.

“My family and I were Christian-ish, but who do not go to church, do not pray, do not say grace before we eat, and do not do anything that is Christian,” Joey explained. “I used to think that although I do not worship [God]—but if I am a good guy—maybe I will go to heaven.

“I was a freethinker for a while after that. But when I entered university, I hung out with some other atheists in my campus…and began to call myself an atheist. I wasn’t strong in my faith anyway, so it was easy for me [to be an atheist].”

But for many people, the conviction of atheism is often reached when they fail to find satisfying answers in religion, as was the case for Willie and Sen Wai, or when they find religion offensive, as was the case for Geetha, 27 and a physiotherapist. That Geetha is also a feminist is important.

“All religions are essentially the same. They degrade women,” Geetha complained. “Women are seen as lower class and expected to conform to men’s expectations. The Indian culture and Hinduism are closely related to each other. I was in a culture and religion that disrespected women, that controlled women on how they should look and behave, for example. There’s no equality: women are a discriminated lot and expected to be submissive.”

Geetha, age 27 and a physiotherapist.

Atheists are sometimes regarded as rude, arrogant, and just as guilty as religious fundamentalists of imposing their opinions on others. The truth is that the atheist community is diverse in many ways, one of which is in how atheists feel and react toward religion. Amir, Willie, Sen Wai, Joey, and Geetha exemplify such a community.

While Joey is rather indifferent to religious people and religious issues, Willie is more diplomatic and wishes more for a rational but calm engagement with religious people.

“I chose to intentionally label myself as an atheist,” Willie revealed. “Part of the reason is to foster the conversation, to force people to ask the question on ‘What is this atheism?’ and the topics around it.”

Sen Wai and Geetha, in contrast, are less diplomatic.

“Religions are somehow considered sacred,” Geetha griped. “Nothing you can say about religion can be seen as constructive. Our arguments are always perceived as hostile by the religious.”

“If atheists are arrogant and disrespectful for calling Christians stupid,” Sen Wai added, “then one has to consider the Bible to be worse because Psalms 14:1 describes nonbelievers as stupid, evil, and incapable of doing good. Islamic preachers claim that my wife and I, being Kufrul-Inkaar, deserve to be tortured in hell. What can atheists say that are more arrogant and disrespectful than what religious people are saying about atheists? I am sure that rude, boorish atheists do exist (as they do in any group of people), but given how atheists in general are constantly being insulted and threatened by religious adherents, I am inclined to excuse them.”

But what about morality? Could atheists be both godless and moral?

“Morality is ingrained within us,” was Geetha’s response. “Morality follows a simple, basic rule: don’t hurt others. Yes, religions have good moral values, but they do have some very bad ones too.”

For Sen Wai: “My morality comes from my innate primate sense of empathy and altruism: my conscience. So far, it has served me well. For example, while most world religions denounce homosexuality, I see no wrong in the love of two persons of the same sex so long as it is consensual and harms no one. Also, I can empathize with gay lovers. I ask myself, ‘What if I love someone but I am forbidden to do so?’ That would be tragic and unfair. I would further assert that the absence of religion would actually make it easier for us to do right by our fellow men in this case.”

“Even in the absence of moral authority [from religion], you can actually figure out what is right or wrong based on how it affects people,” Willie added. “Evolution has helped to select people who do learn to live cooperatively, so basically, surviving together is always better than surviving individually. And the laws or values that actually help the society should be the [morality] that moves us forward.”

Willie’s answers are reminiscent of utilitarianism (that we should do whatever produces the best overall consequences for all concerned) and of the Golden Rule: “that which is hateful to you, do not do to your neighbor”. In other words, morality is decided on the basis that we do whatever is best, without bias, for everyone, and that we treat everyone as we would like to be treated.

“[Morality] is actually quite an easy and straightforward issue to deal with,” Willie further explained. “It is just that people have the background that they must somehow be told what is right and wrong. At the end of the day, somebody who actually figures out and decides to do things because he knows it is right is a more moral person than somebody who does something because he is told it is right. So, if you believe and you do it, you are actually an agent of good, but if you are told to do it because it is good, then you are nothing, you are a robot, just following instructions. That’s just dumb, not moral.”

Like many Malaysians today, the five atheists I met each expressed concern over the rise of religious fundamentalism in the country.

“I am frightened at the rate by which we are losing our country to religious fundamentalism,” Sen Wai agonized. “Issues like Muslims touching dogs and gymnasts wearing leotards, which did not seem to matter in the past, are now headlining news. I am no political analyst, and I do not pretend to know the solution, but a government which continuously exploits racial and religious schisms cannot be healthy for a nation’s sense of unity.”

Yes, dogs are nice to see and even nicer to touch, but if you are a Muslim, dogs are haram, and you have to curb your innate urge to touch them (image from financetwitter.com).

“Malaysians weren’t like this before this,” Willie exclaimed. “In the past, you even have an advertisement of Guinness that said, ‘Guinness: [baik untok kita (good for us)]’, and you had Malays in that ad. The fundamentalism wasn’t there in the early days of the country. So, how did we even get to this? There are a lot of scholars who went to these Arab countries, and they brought back a lot of the values that they actually saw from those countries which wasn’t actually here in the early days. The whole idea that there is only one way to be a Muslim or one way to be a Muslim country is ridiculous…I think a lot of people [from a lack of reference] have lost sight of Malaysia’s own past.

A Guinness advertisement from 1968, picturing two Malays (presumably Muslims too) in an ad for an alcoholic drink (image from hareshdeol.blogspot.com).

Your aurat is showing, miss. An early ad from the 1970s. Good old days, or sinful old days? You have to wonder how people in the past ever managed to get to heaven (image from nurulrahman.com).

“The opposite voice is not being heard. People don’t dare to speak out, especially from the politicians … You [also] have politicians who are saying secularism is bad for the country. This is a very sad state of affairs.

“We are living in a world that is enormously globalized, and it is very seldom where you can go to a country without actually seeing many Christians and Muslims living side by side regardless of which majority is in power. So, if you impose one set of fundamentalist values based on religion then you will run in contrary with others. So, in today’s world—especially in today’s world—you can no longer run this one-kind mind where only this set of values is the right one. The only way to apply all sets of values fairly to everybody is actually the secular kind of system.”

Geetha fears the rise of religious fundamentalism will create a society that is increasingly irrational and closed. But it is women’s rights, she fears, that will be hit hardest by rising religious fundamentalism.

When I asked her what the country should do, she simply said, “Keep religion out of politics.”

You might think that the atheists, having forsaken their religions, would be happy to see the back of religion or gloat at its destruction. Remarkably, none of those I interviewed wished to destroy religion, even given a hypothetical chance to do so.

“I would rather promote science than to destroy religion,” Geetha revealed, “because science encourages critical thinking. Destroying religion is pointless. I have many friends who are religious, but they are also liberal in their thinking.”

“I think religion is natural, like the most natural human thing,” Amir opined. “Religion becomes people’s identity, especially during times of trouble and persecution. Strip a person of everything, and their religion is all that is left.”

“If I destroy religion, will I also destroy its culture?” Joey asked. “I don’t like religion when it affects people’s decision-making. But I like the culture that comes from religion [such as its festivals and celebrations].”

Destroying religion means denying people their religion. And that would exacerbate, not resolve, human conflicts, because for many people their identity, self-worth, and culture derive, sometimes in large part, from their religion. All human civilizations, past and present, have been influenced to varying degrees by religion, giving rise to amazing works of religion-inspired art and architecture. Destroy religion and the world could be poorer for it. I can appreciate why Amir and Joey are reluctant to see an end to religion.

For Willie and Sen Wai, freedom of choice means the freedom to believe, even in religion.

“Fundamentally, we must give human beings choice,” Willie explained. “That means, even making sure the false choices are still available. You cannot tell somebody that ‘You must reach a [certain] conclusion’. You can hope they reach the correct conclusion. The whole idea of promoting science or scientific literacy is that humanity will become an intelligent species who will work based on evidence. Even within science, the principle is always to question yourself. At the end of the day, you must make sure everyone has the freedom to [even] make their own mistakes and to figure out their paths.”

“I think it is neither possible to be rid of religion entirely nor do I want to,” Sen Wai answered. “I believe in secularism. I believe that people should have the freedom to believe in whatever they want to believe, so long as they do not harm anyone by it or try to force others to comply with their beliefs.”

I came away from my research enlightened: far from being deluded, immoral, or aimless, atheists can be very clear and articulate about their principles, stances, and concerns. Without religion, these atheists have found freedom, not to fall inexorably into a life of aimlessness, depravity, and despair, but to discover that a moral and meaningful life is not only desirable and possible but perhaps a better outcome than the one prescribed by religion. Unlike the religious, who are fixated on the afterlife, these atheists are far more focused on the here and now, on whether they are making full use of their single, finite life, for the afterlife, to them, is simply a lie.

Ricky Gervais, the English actor and comedian, said it best about living his atheist life: “[When I die,] it’s the end of something glorious, so I have to pack it all in. But, you know, I’m not depressed about it. I don’t want to die any more than anyone else. And I think there’s this strange myth that atheists have nothing to live for. It’s the opposite. We have nothing to die for. We have everything to live for.”


I would like to thank Amir, Willie, Sen Wai, Joey, and Geetha for their time and their frankness in being interviewed for this article. They are members of MAFA (Malaysian Atheists, Freethinkers, Agnostics and Their Friends), a social and discussion group on Facebook.

References

  1. Charities Aid Foundation. 2014. World Giving Index 2014. Charities Aid Foundation, Kent, UK.
  2. Hoodbhoy, P.A. 2007. Science and the Islamic world – The quest for rapprochement. Physics Today, August 2007. pp. 49-55.
  3. WIN-Gallup International. 2012. Global index of religion and atheism. Press Release. Zurich, Switzerland.
  4. Zuckerman, P. 2009. Atheism, secularity, and well-being: How the findings of social science counter negative stereotypes and assumptions. Sociology Compass, 3: 949–971.