
 AQA 3.2.1.3; Edexcel B 2.6 (vi); OCR A 2.1.1 (a, b, e, f); OCR B 2.1.1 (a(i), d, i)

Definitions

Let’s get some boring definitions out of the way before getting onto the good stuff. First of all, what is a microscope? Although this may sound like a rather condescending question, many would find it surprisingly difficult to define. A microscope is an optical instrument with a lens (or lenses) capable of magnifying an object. The term ‘microscopic’ describes objects that cannot be seen without the aid of a microscope.

So what about magnifying? Does this just mean making objects appear larger and being able to see them in more detail? If so, why the need for so many different types of microscope? Well, there is some truth to each of those ideas. To magnify simply means to make something appear larger than it really is, and magnification can be defined by the equation: magnification = size of image / size of actual object. Let’s break this equation down slightly – magnification has no units, and is described by the number of times the object has been enlarged – for example ×100. The size of the image can be measured using a ruler and the picture the good people (HA!) of your exam boards will give you.
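For the calculator-averse, the equation and its rearrangement can be sketched in a few lines of Python (the measurements below are invented purely for illustration, not taken from any real exam question):

```python
# Worked example of magnification = size of image / size of actual object.
# All measurements are made up for illustration.

def magnification(image_size_mm, actual_size_mm):
    """Return the magnification (a dimensionless ratio)."""
    return image_size_mm / actual_size_mm

def actual_size(image_size_mm, magnification):
    """Rearranged form: actual size = size of image / magnification."""
    return image_size_mm / magnification

# An image measured at 50 mm of a cell that is really 0.05 mm across
# gives a magnification of x1000:
print(magnification(50, 0.05))

# Conversely, a x400 image measured at 20 mm corresponds to a real
# object 0.05 mm (50 micrometres) across:
print(actual_size(20, 400))
```

The main trap in exam questions is mixing units – convert everything to the same unit (usually micrometres) before dividing.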

An eyepiece graticule and stage micrometer can also be used. The graticule comes built into many microscope models, and basically consists of a series of increments that can be compared to increments of fixed, known length (the stage micrometer, which is placed on the microscope stage beneath the objective). This allows you to calibrate the graticule – work out the real length each of its increments represents – at different magnifications. There’s a great, concise tutorial here on calibrating the eyepiece graticule, but suffice it to say that if any readers plan on taking biology at HE you’ll find yourself spending far too much time squinting down microscopes at these!
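To make the calibration procedure concrete, here’s a minimal Python sketch – the division counts and micrometer spacing below are made-up example numbers, not values from any particular microscope:

```python
# Calibrating an eyepiece graticule against a stage micrometer.
# Example numbers are invented purely for illustration.

def calibrate(graticule_divisions, micrometer_divisions, division_length_um):
    """Return the real length (in micrometres) of one graticule division.

    graticule_divisions: how many eyepiece graticule units line up with
    micrometer_divisions units of the stage micrometer, each of which is
    division_length_um micrometres long.
    """
    return (micrometer_divisions * division_length_um) / graticule_divisions

# Suppose 40 graticule units line up with 10 micrometer units of 10 um each;
# then each graticule unit represents 2.5 um at this magnification:
unit = calibrate(40, 10, 10)

# A cell spanning 18 graticule units is therefore 45 um across:
print(18 * unit)
```

Remember that the calibration only holds for the magnification it was done at – change objective, and you must recalibrate.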

So then, if our image is nice and magnified, we can observe it in as much detail as we like by simply increasing the magnification, right? Sadly, no. This is where resolution comes into play. Resolution is the smallest distance between two points at which they can still be distinguished as separate objects. Basically, a microscope with high resolving power will allow you to distinguish between two very small, very close objects without their being reduced to a blurry mess (think about pixelated images having low resolution). Although we’re now firmly into detail you really won’t need at A-level, for those interested resolution can be calculated as follows: resolution = (0.61 × λ) / numerical aperture. “More of those funny symbols,” I hear you cry. Fear not, λ (lambda) is just another Greek letter, used here to represent the wavelength of light. Numerical aperture describes the range of angles over which the objective lens can gather light. As an added bonus I’ll also mention penetration depth (insert jokes here), which essentially refers to the ability of the microscope to see deep into the sample being magnified. There is often a trade-off between resolution and penetration depth.
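For the curious, plugging some typical values into that equation shows why light microscopes hit a wall at roughly 200 nanometres (the wavelength and numerical aperture below are just representative figures, not from any specific instrument):

```python
# Resolution limit: r = (0.61 * lambda) / NA
# Illustrative values only - real objectives vary.

def resolution_nm(wavelength_nm, numerical_aperture):
    """Smallest resolvable separation between two points, in nanometres."""
    return (0.61 * wavelength_nm) / numerical_aperture

# Green light (~550 nm) through a good oil-immersion objective (NA ~1.4)
# gives a limit of roughly 240 nm - anything closer blurs together,
# which is why ribosomes (~25 nm) need an electron microscope:
print(resolution_nm(550, 1.4))
```

Notice that shortening the wavelength improves (shrinks) the resolution limit – which is exactly why electron beams, with wavelengths thousands of times shorter than visible light, resolve so much finer detail.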

 

History and diversity of the microscope

The very earliest, and very primitive, microscopes were made circa 1590 by Zacharias Janssen and his father Hans (although this claim has been disputed), who put magnifying lenses into a tube and noticed the magnifying effect this had. This is the earliest example of the compound microscope, but we’ll get to that later. However, Antonie van Leeuwenhoek is often credited as the inventor of the microscope; he used tiny curved glass lenses to magnify exceptionally small organisms. Around 1670 Robert Hooke (of Hooke’s law fame) added a water flask and barrel to capture and focus light onto the specimen, which was aided by the addition of another lens. Here’s a great link if you’re interested in reading more about the contributions of Robert Hooke to microscopy. I’d also recommend visiting this link to learn more about the history of microscopy in general.

Today, microscopes are generally divided into two main categories – light and electron – which we’ll examine now in more detail.

Light microscopy

This is the type of microscope you’ll be most familiar with, and will have almost certainly encountered before. The resolution is not as powerful as in electron microscopes, however light microscopes are invaluable in that they allow for the study of live cells. Here are some of the main types of light microscopy:

  • Bright-field microscopy – this is the kind you will be most familiar with, and is frequently used in classrooms. These microscopes have an ocular lens (near the eyepiece), an objective lens (close to the object being studied) and a condenser lens (to focus light), as well as a light source.
  • Phase-contrast microscopy – this creates a light phase shift (makes light ‘fall out of sync’) to make light regions appear lighter and dark regions appear darker.
  • Differential interference-contrast microscopy – as above, but using two beams of polarised light.
  • Stained bright-field microscopy – As with bright-field, but using a stain to give pigment to the sample being studied. More on stains later.
  • Fluorescence microscopy – Now we’re onto the good stuff! Cells or specific regions and organelles within them are stained (often with different coloured stains to show different organelles at the same time) with a fluorescent dye. Fluorescent dyes are remarkable in that, when they are stimulated by a beam of light, the wavelength they emit is longer than that which they absorb, giving off beautiful fluorescent colours.
  • Confocal microscopy – As above, but with an enhanced ability to focus the light by using a pinhole system, giving a clearer image.

Electron microscopy

Here lies much greater resolution. However, the trade-off is that these microscopes cannot observe living tissue – or, at least, if the tissue was alive when it went in, the electron beam would soon take care of that. Here are the three main types:

  • Transmission electron microscopes – A beam of electrons passes through the object – parts of which will absorb electrons and appear darker.
  • Scanning electron microscopes – Electrons are fired at the surface of the object, causing secondary electrons to be re-emitted. This technique is only capable of viewing the surface of objects; however, it produces very beautiful images. I recommend visiting the website of scientist and artist David Scharf, who has transformed SEM into an art form.
  • Freeze-fracture microscopy – the specimen is frozen in liquid nitrogen and its membranes are split apart, allowing for the observation of membrane proteins. However, this technique is known to produce artefacts (anomalies) in the final image due to the effects of freezing.

I hope this was helpful. Next time I’ll be discussing some of the applications of microscopy, including flow cytometry and staining.

Entry 1: Cell Theory

Edexcel A 3.1; Edexcel B 2.1 (i, ii), relevant background for AQA and OCR A & B.

Cell theory

It may be difficult to believe there was once a time that scientists were unaware that cells are the most fundamental unit of life. It was not, in fact, until 1838 that Schleiden and Schwann (German botanist and physiologist, respectively) first proposed cell theory, which consisted of three main arguments:

  • Cells are the most basic and fundamental units of life
  • Through biogenesis (more on this later), cells come from pre-existing cells
  • Every living organism is composed of at least a single cell

Whilst many augmentations and additions to these three basic principles now exist (often to address the impact of natural selection on cells), the principles themselves still stand today.

There are approximately 60 trillion cells in the human body – each containing 10,000 different types of molecules. In such complex, multicellular organisms these cells are organised into tissues (a group of similar cells which together carry out a specific function), which are organised into organs (a collection of tissues which carry out a specific function(s)), which are organised into the organism.

Cell size

So why are cells so tiny? Cells are often 1-100 micrometres in diameter (0.001-0.1 millimetres), condemning the biologist to eternally squinting down microscopes to observe them (this is particularly not fun for the long-sighted biologist) – however, there is good reason. Cell size is largely constrained by the surface area-to-volume ratio. What is meant by this is that the increase in the volume of the cell is not proportionate to the increase in surface area. For example, a cell with a diameter of just 1 μm (‘μ’ is the letter ‘mu’ in the Greek alphabet and in scientific notation is used to represent ‘micro’. It is also used in statistics to represent the population mean) will have a 6:1 surface area-to-volume ratio. But when cells get larger this ratio decreases – at 3 μm the ratio is just 2:1.
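Those two ratios fall straight out of the geometry of a sphere – here’s a quick Python check (treating the cell as a perfect sphere, which real cells of course are not):

```python
import math

def sa_to_volume_ratio(diameter_um):
    """Surface area : volume ratio for a spherical cell of a given diameter.

    For a sphere, SA = pi * d^2 and V = pi * d^3 / 6, so the ratio
    simplifies to 6/d - it falls as the cell grows.
    """
    r = diameter_um / 2
    surface_area = 4 * math.pi * r**2
    volume = (4 / 3) * math.pi * r**3
    return surface_area / volume

print(sa_to_volume_ratio(1))  # about 6, i.e. the 6:1 ratio for a 1 um cell
print(sa_to_volume_ratio(3))  # about 2 - triple the diameter, ratio drops to 2:1
```

Because the ratio is simply 6/d, doubling a cell’s diameter always halves its surface area-to-volume ratio – which is the whole problem.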

So why is this an issue? Well, the activity of the cell is proportional to its volume – meaning it needs more resources the larger it gets. However, these resources can only be obtained through the surface of the cell, as molecules both enter and exit the cell this way (more on transport across the cell membrane later), and will have a larger distance to travel in cells with greater volume. The result is that the cell becomes less efficient as the surface area-to-volume ratio shrinks, hence cells’ generally small size.

Next up, we’ll look at the history, diversity and uses of the instrument that first allowed the observation of cells – the microscope!

References

Sadava, Hillis, Heller & Berenbaum (2012). Life: The Science of Biology. 10th ed. Sunderland, MA: Sinauer. 78-79.

I’ve been wanting to resurrect this blog for a long time; however, I frequently found myself either uninspired or running the risk of self-plagiarising. Recently, I found myself discussing YouTube tutorials over a pint with a friend – how there are so many out there, and how I wished someone had made a comprehensive, specification-based series of videos for the subjects I studied at A-level. I had also noticed that an education-based, A-level-relevant post I did a while back still attracts hits (I like to think this is because it’s helping some students out there). Coupled with the recent overhaul of specifications, this gave me the idea of resurrecting my blog and putting it to good use – namely, to try and build a specification-relevant series of posts covering A-level biology, but also to go above and beyond it for those most interested, who (like myself at A-level) often feel frustrated by the constraints of mark schemes and specifications. An additional benefit is that the comments system provides an opportunity for students to ask questions if they are struggling, which may consolidate their understanding. I italicise the topics of the specifications I will be covering at the head of each post.

 

Hopefully, this will prove useful and, in which case, I will continue to create more posts in the future. Please let me know what you think.

Researchers John Cannarella and Joshua Spechler at Princeton University have recently used epidemiological models to predict and describe the inevitable decline in Facebook’s popularity. Whilst the 21st-century cynic in all of us may relish the comparison of Facebook to the bubonic plague, even as someone who has never used Facebook it is difficult for me to deny the unique niche this particular social media website occupies.

Analogies can, however, be useful to us sometimes simply because they are…well, simple. We were recently set coursework that involved using cars and car parks as an analogy for community ecology. It’s much more efficient to identify makes and models of cars than particular species of grasses, or to mark, release and recapture mobile specimens. Similarly, with over 1 billion unique users in over 210 countries, obtaining a representative sample across Facebook’s demographic will prove difficult. Epidemics specifically could be a useful comparison in the sense that either a cure, immunity or containment will be developed in time. Social phenomena are fickle and, arguably, ‘immunity’ could represent the novelty of social networking sites wearing off.

However, this analogy fails to account for competition. Facebook once shared a niche with Myspace – a niche which was eventually partitioned, with the latter site being forced to focus on a musical audience instead to prevent itself from being outcompeted. Whilst other social networking sites undoubtedly exist, none occupy the same niche. Instagram and Tumblr are predominantly image-focused, whilst Twitter carries a 140 character limit. Frankly, there just isn’t anywhere for Facebook users to go. The idea of comparing social network ‘immunity’ to disease immunity is also flawed – the latter arises from physiology, whilst the former arises from changing social trends, which are far more difficult to predict and control variables for.

Undoubtedly, the lure of the convenience of Facebook keeps many of its users active, and inevitably a more convenient, user-friendly model will come along one day. I don’t doubt for a second that the time will come for Facebook to be ‘eradicated’ like the plague. However, I simply do not think that the public will ‘jump ship’ when doing so would mean jumping overboard.

A-levels

The most roller-coaster two years of my life by far; I’m both sad and elated to have progressed from A-level student to undergraduate. Never in my life have I felt like so much of an abject failure, nor have I ever felt so proud to have had certain experiences and opportunities (see my previous post on my Oxford interview experience) as I have in the last two years. Whilst I was not lucky enough to receive an offer from Oxford, I’m still fairly pleased to have held my own throughout the interviews, and was fortunate to get into my other university of choice.

I appreciate, however, that others were not so lucky – many of whom deserved better than they were awarded. But that’s the nature of exams – they are intrinsically imperfect. The exam itself as a form of assessment does not suit everyone. Different people perform differently under such pressure. Some of the most intelligent people I know absolutely fall to pieces under exam conditions, whereas some people simply get lucky with the topics selected for the exam. Not to mention the fact that subjects vary hugely from one another in their methods of assessment – some of my A-levels were strictly essay-based, whereas some never required me to write more than six or seven lines at a time.

It also seems to me that, particularly for science-related subjects, exam boards have yet to find a decent way of distinguishing an ‘A’ grade student from a ‘C’ grade student. In my opinion, students shouldn’t simply be assessed on how many questions they can answer, but also the depth of their understanding, i.e. by implementing a sort of ‘tier’ for each grade, and ensuring students satisfy the criteria of each tier to determine their grade. After all, this is how some of the humanities and a lot of BTEC students are assessed.

My advice to current and upcoming A-level students is this: if you have a fantastic memory, it won’t be enough – you have to understand the material. If you are naturally gifted, understanding the material won’t be enough – you still need to memorise information. However much effort you are putting in, there is no such thing as ‘enough’ effort. And remember, this is going to be the most difficult two years of your life (at least so far), but I promise it will be worth every second – live it, breathe it, love it!

Many plausible links have been drawn between genetics and various mental disorders (for example, schizophrenia is frequently linked to the NOTCH4 gene; bipolar disorder is frequently linked to the NRG1 gene). However, a conversation with an acquaintance regarding the social stigma they experienced after being diagnosed with a mental disorder got me thinking about the origins of such disorders. Surely such traits would be extremely maladaptive in an increasingly social world.

Of course, there is some debate as to whether there are causal mechanisms for genetic links to mental disorders, as well as whether the diagnosis of mental illness is valid and reliable. However, for the sake of brevity, let’s assume that this is the case.

Whilst some cases have been known to be simply the product of de novo mutations (spontaneous mutations not inherited from either parent), this does not account for other evidence indicating a significant level of heritability amongst mental disorders. So, what else could explain the presence of such genes today?

The explanation could, of course, be far simpler – modern Homo sapiens could simply have been removed from the selective pressures that would have caused mental disorders to be selected against in the first place. Similarly, many such disorders do not present themselves until the individual’s reproductive span has already begun. However, there are some slightly more radical theories appearing today.

Dr. Peter McGuffin, when asked about the evolution of mental disorders, drew an analogy between sickle cell disease and mental disorders. Although sickle cell disease is selected against, carriers of the sickle cell allele gain some resistance to malaria, which maintains the allele’s frequency in a population. Dr. McGuffin argues that the same could be happening with mental disorders, and found a higher fertility rate amongst the unaffected relatives of depression sufferers.

Although mental disorders are usually polygenic and not explained by the simple inheritance of a single trait, it’ll be interesting to see what, if anything at all, this research is eventually able to tell us about the way in which such disorders have evolved.

It’s amazing what a little bit of encouragement can do. I’d always thought of entry to institutions like Oxford and Cambridge as equivalent to smashing impenetrable glass ceilings only accessible to the independent school elite due to their socio-economic status and nepotistic advantage. However, after a positive experience at an open day at Cambridge, as well as encouragement from those around me, I decided to roll the dice and apply to the University of Oxford to read Biological Sciences.

Thirteen days before the Biological Sciences interviews were due to take place, I saw an email from Oxford on my phone. Wondering why the insensitive bastards had decided to reject me the day before I had another interview at my second choice university (I was already convinced that rejection was inevitable based on my less than stellar GCSE results), my heart instantly began to palpitate once I saw that I had, in fact, secured an interview at my preferred college, St. Anne’s. I thought I’d write about my preparation and experience there, as it may be helpful to anyone thinking of applying for entry in 2014.

Although there is really no way to be completely prepared for an Oxbridge interview, one of my lecturers looked at sample questions with me and ran me through a mock interview, which was immensely helpful in giving me a ‘first taste’ of the format. Although my performance was abysmal, it at least gave me an idea of what the worst-case scenario would be. We looked at a number of broad topics relating to Biological Sciences, as well as looking back over my personal statement, as there was every chance that questions could come up based on this at any time. My lecturer was also able to deduce who was most likely to interview me (she was 100% right!), and so we were able to look into his interests and background.

Privately, I watched the interview videos on the Oxford website. My advice to readers would be to watch the sixth video, in which the candidate gets a question wrong on the very topic she claimed to be especially keen on in her personal statement. This highlights just how important it is for potential candidates to re-read their personal statements and make sure they know their stuff – don’t let that be you! I also made sure that I had answers prepared for general questions such as “Why do you want to study at Oxford?” and “Why do you want to study Biological Sciences?”. I also found this YouTube playlist made by a current student quite helpful.

The only information given to us prior to the interview was the time we were required to be there, and that we would have to stay from the Monday until the Tuesday, and might be required until the Wednesday (accommodation is provided free of charge throughout). Upon arriving at St. Anne’s, we were greeted by very festive students who were actively engaging with the interview candidates to ease their nerves. The room we waited in had sofas and bean bags wall-to-wall, and animated films were constantly playing on a television. The whole experience was extremely informal, and it felt more like a party than a serious academic interview at one of the most prestigious universities in the world, although I have heard that other colleges were a lot more formal around the interview period. A board was present with our interview times and the names of our interviewers on it. It became immediately obvious that all of the Biological Sciences interviews were at different colleges the following day, mine being at St. Hilda’s. We were not told who would be interviewing us on the second day (although I managed to figure out who mine would probably be by checking the St. Hilda’s website).

My name was eventually called for my first interview and a third-year student walked me to the room in which the interviews were being held. The previous candidate told me on her way out that I had nothing to worry about and, after a period of discussion, the interviewers called me in. Whilst I had expected to be greeted by a stern, tweed-clad toff, I was pleasantly surprised to find that my interviewers were fun, encouraging and, above all, enthusiastic. There were two interviewers for each interview, and the general pattern seemed to be that one would take notes while the other asked questions, and occasionally they would swap. The questions I was asked varied greatly – they began by asking about my personal statement, and we got into a rather deep conversation concerning intelligence. I was then shown a graph and a tree trunk and asked to explain what was happening, and I was also asked to describe the adaptations of some fish in a small tank in the room. I found that if I wasn’t telling the interviewers exactly what they wanted to hear, they wouldn’t sit in silence until I got a question correct – they would instead give hints to push me in the right direction.

Much of the afternoon was spent talking to the other St. Anne’s Biological Sciences interviewees. There were only seven of us (although more students were coming in from other colleges to be interviewed at St. Anne’s), and we all found out a bit about each other’s background. I was quite shocked to discover that I was the only state school applicant there, and whilst I was previously fairly happy with my GCSE results, I realised that I had a lot to prove at Oxford compared to the 11 A* applicants I was up against!

Whilst all of us had had fairly similar first interviews, the second interviews were somewhat shrouded in mystery, as we were sent to different colleges (me to St. Hilda’s, three to St. Catherine’s, one to St. John’s, one to St. Hugh’s and one to Christ Church) and did not know our interviewers. My interview was one of the later ones, so I spent most of the morning brushing up on things I thought might come up. Most of the other applicants returned from their second interview looking shell-shocked, and felt that it had not gone as well as the first. However, I should probably note that I did not meet a single current student there who didn’t claim to have had a ‘bad’ interview. All of the second interviews appeared to follow completely different formats, whereas much of the first interview had been the same for all of us.

When my name was called for the second interview, a third-year student walked me from St. Anne’s to St. Hilda’s and informed me that the latter was very similar in atmosphere and attitude to St. Anne’s (much to my relief!). I didn’t get to experience much of St. Hilda’s, however, as I was whisked into my second interview pretty much immediately after arriving. The interviewer for my second interview was, again, very enthusiastic and approachable. The interview itself was far more rigorous and scientific than the first but, in a way, the more objective approach was comforting, as I was certain when I was getting the answers correct. Much of the interview centred around a graph of genome sizes, and some fairly fundamental questions about cloning and conservation.

With my legs having turned to jelly by the end of my second interview, I headed into town for a bit of comfort food before heading back to St. Anne’s. Just before the final interview was about to start, all of the Biological Sciences students were told they could leave, so I handed my key back in at the Porter’s Lodge and was on my way home, feeling that I’d done my absolute best and gotten the most out of the experience. The outcomes of our interviews are being sent out on the 10th January 2013 and, no matter what the letter says, I’ll always be proud of the fact that I had an interview.

I hope this has been helpful to anyone thinking of applying to Oxbridge for 2014 (or later) entry. If you have any questions, please do not hesitate to get in contact.

Global warming is one of the most controversial issues in science today. Public opinion remains divided as to the exact role of humans regarding climate change, with 48% of Americans believing global warming is exaggerated.[1] Although many arguments have been put forward to diminish anthropogenic global warming, some of the most commonly cited issues include: the perceived lack of scientific consensus regarding climate change; the argument that anthropogenic global warming is not, in itself, bad; that global warming results from solar activity, not human interference; and that the climate has always changed, and so current changes do not necessarily result from human activity.

Figure 1. Carbon dioxide levels and temperature from 450,000 BP.[2]

Whilst it is true that the climate has constantly changed throughout history, this does not necessarily negate the argument that global warming is anthropogenic. Figure 1 shows us that carbon dioxide levels and temperature certainly have been higher in the past compared to the present day, and this has been used to suggest that anthropogenic global warming is a myth – otherwise we would not have seen high temperatures and abundances of carbon dioxide prior to industrialisation. However, this argument can be countered scientifically, because the climate simply responds to whatever factors cause it to change – the prevailing factor in modern times being human activity. In the past, temperatures have risen for varying reasons. Periods of warming following ice ages (including the last ice age, which ended around 10,000 years ago and is evident in Figure 1) have led to rising global temperatures, alongside changes in the Earth’s surface (the most notable of which, the Palaeocene-Eocene Thermal Maximum, occurred 55 million years ago and is widely associated with global warming).[3] Therefore, whilst there has always been climate change, there have been significant past events that caused it, and this does not negate the role of humans in present-day global warming.

It is also frequently argued that, rather than being anthropogenic, global warming is caused by solar activity. This is because, in the past, there have been periods where solar irradiance and climate change have been positively correlated.

Figure 2. Global temperature and total solar irradiance.[4]

However, as Figure 2 shows, solar activity has declined since 1980, whereas temperature has increased steadily since 1950, suggesting that recent warming cannot be explained by solar irradiance. Therefore, the argument that solar activity, rather than human activity, causes global warming can be countered scientifically by looking at statistical information on solar irradiance and temperature change.

Furthermore, one of the primary arguments made against anthropogenic climate change is that the positive repercussions of global warming far outweigh the negative repercussions. The evidence presented for this focuses on the harsh climate of the Dark Ages and similar time periods, in which disease and frost were rife. It is often thought that countries at higher latitudes – namely countries within the Arctic Circle, such as Canada, and areas such as Siberia – would benefit from a warmer climate. However, this could be countered scientifically by explaining that such areas often have poor soil quality (and hence would not necessarily benefit from warmer conditions).[5] Moreover, the melting of polar ice caps that results from global warming causes the loss of many habitats for organisms the world over, and so would not have positive consequences for the environment.

Lack of scientific consensus regarding global warming is also frequently given as a reason for believing global warming is not anthropogenic. The Global Warming Petition Project boasts over 31,000 signatures from American scientists, all of whom believe that there is insufficient evidence to conclude that climate change is caused by human activity. However, of the 31,000 signatures, a mere 39 are climatologists (0.13%),[6] suggesting that while there may be some lack of consensus regarding anthropogenic global warming, it would seem there is very little among scientists whose area of expertise concerns climate change.

It is difficult for scientists to prevent scepticism regarding anthropogenic global warming, as they are unable to prevent media coverage which diminishes or denies the role of humans in climate change. However, as scientists have an ever-increasing wealth of statistical evidence to support anthropogenic global warming, it is my opinion that fact will eventually outweigh fiction.

References

1. Goldenberg, S. (2010). Nearly half of Americans believe climate change threat is exaggerated. Available: http://www.guardian.co.uk/environment/2010/mar/11/americans-climate-change-threat. Last accessed 24th September 2012.

2. Gore, A (2006). An Inconvenient Truth: The Planetary Emergency of Global Warming and What We Can Do About It. New York City: Rodale Press.

3. Natural History Museum. (2012). What Causes Climate Change?. Available: http://www.nhm.ac.uk/nature-online/environmental-change/what-is-climate-change/climate-change-causes/index.html. Last accessed 24th September 2012.

4. Skeptical Science. (2011). Solar activity & climate: is the sun causing global warming?. Available: http://www.skepticalscience.com/solar-activity-sunspots-global-warming.htm. Last accessed 24th September 2012.

5. Skeptical Science. (2010). Positives and Negatives of Global Warming. Available: http://www.skepticalscience.com/global-warming-positives-negatives.htm. Last accessed 24th September 2012.

6. Global Warming Petition Project. (2012). Qualifications of Signers. Available: http://www.petitionproject.org/qualifications_of_signers.php. Last accessed 24th September 2012.

Throughout history aggression has been the cause of much sorrow for humanity, often resulting in institutionalisation and social ostracisation for the aggressor; and trauma or even (in extreme situations) fatal injuries for the victim. The Oxford English Dictionary defines aggression as ‘feelings of anger or antipathy resulting in hostile or violent behaviour; readiness to attack or confront’. The prevention of aggressive behaviour has been the subject of extensive research – but to prevent aggressive behaviour, we must first understand the cause. Some theories put forward to suggest the cause of aggressive behaviour focus on biological causes, which include genetics, hormonal mechanisms and neural mechanisms. Other theories suggest that aggression is, in fact, caused by social and environmental factors, such as deindividuation and social learning theory.

Social learning theory (SLT) is commonly proposed by behaviourists to explain aggression. SLT suggests that all behaviour, including aggression, is learned either through direct or vicarious experience. According to SLT, a person learns through direct experience when their behaviour is reinforced: if violent behaviour leads to positive outcomes (or ‘rewards’) for the aggressor, they may be encouraged to repeat it. Aggressive behaviour can also be learned vicariously through observation, for example a child seeing and then imitating violence.

Bandura et al. (1963) set out to demonstrate how social learning theory can explain aggression. They divided nursery school children into three groups, all of whom saw a film of an adult behaving aggressively towards a Bobo doll. In the first condition, the film ended there. In the second condition, the children saw an additional clip of the adult being rewarded for his aggressive behaviour. In the third condition, the adult was seen to be punished for it. The children were then filmed with the Bobo doll. Those in the first and second conditions were markedly aggressive towards the doll, with the second group significantly more aggressive than the first, whereas the children in the third condition tended not to be aggressive towards it. This demonstrates behaviourist psychologist B.F. Skinner’s principles of operant conditioning (learning by reinforcement): the children who behaved most aggressively were those who had observed the adult being ‘rewarded’ for his behaviour, and the least aggressive were those who had observed him being ‘punished’. Bandura’s experiment also demonstrates vicarious learning, as the children in the first condition became aggressive after merely observing the behaviour.

Social learning theory can therefore offer a plausible explanation of aggressive behaviour. However, it presents acts of aggression as simple mimicry, which gives us little insight into exactly how the behaviour is learned. Moreover, it places no control over aggressive behaviour with the individual; instead, an external source must be present to create the aggression. Social learning theory can therefore explain why aggression occurs, but not how it can be avoided by the individual.

Social psychologists have proposed deindividuation as an explanation for aggressive behaviour. When deindividuation occurs, people lose their sense of identity and socialisation. Since our sense of what is, and is not, acceptable in society results from socialisation, losing it can lead us to act in a more ‘primitive’ manner, and this is where aggression arises. Putting on a uniform or being part of a very large crowd can cause us to lose our sense of individuality: we feel we are simply part of a group rather than an autonomous being and, subsequently, we are more susceptible to committing aggressive acts.

The anthropologist Watson (1973) hypothesised that tribal warriors who changed their appearance significantly (and who were, therefore, the most deindividuated) would be more likely to kill and torture their opponents than those who did not. He studied 23 societies, 13 of which were especially brutal. Of these 13 societies, all but one changed their appearance and were therefore deindividuated, whereas seven of the ten less violent societies did not change their appearance, and so had not undergone the process of deindividuation.
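To get a feel for how strong Watson’s association actually is, here is a quick sketch computing the odds ratio for his 2×2 table. This is my own illustration, not an analysis Watson reports: I am taking ‘all but one’ to mean 12 of the 13 brutal societies changed their appearance, and seven of the ten less violent ones did not.

```python
# Watson (1973): appearance change vs. brutality, as a 2x2 contingency table
# (counts as stated in the text; the split is my reading of "all but one")
changed = {"brutal": 12, "less_violent": 3}
unchanged = {"brutal": 1, "less_violent": 7}

# odds of being a brutal society among appearance-changers vs. non-changers
odds_changed = changed["brutal"] / changed["less_violent"]        # 12/3 = 4.0
odds_unchanged = unchanged["brutal"] / unchanged["less_violent"]  # 1/7

odds_ratio = odds_changed / odds_unchanged
print(round(odds_ratio, 1))  # 28.0
```

An odds ratio of 28 is a very large effect for social-science data, which is partly why this study is so widely quoted, though with only 23 societies the sample is small.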

Deindividuation has many merits as an explanation for aggression. It can be seen in everyday society, such as in football hooliganism and rioting, where people may feel more able to commit violent acts because they are simply part of a crowd. However, deindividuation can also be criticised as an explanation for aggression. Social psychologists suggest deindividuation occurs when a person is in a crowd, because the crowd removes their sense of identity and therefore causes them to act in an antisocial manner; yet it is group behaviour that usually encourages conformity and social behaviour, suggesting that the very same scenarios that teach us norms and values also strip us of them.

Membership of certain institutions, particularly prisons, has also been blamed for aggressive behaviour. The importation model, put forward by Irwin and Cressey (1962), suggests that violence occurs within prisons because the type of person most likely to enter such an institution is already predisposed to violent behaviour by the life experiences that put them there. The deprivation model turns this on its head, suggesting that the institution itself is most to blame for escalating violent behaviour, due to the frustration inmates experience at the loss of their personal freedoms.

One of the most controversial experiments in the history of psychology, the Stanford Prison Experiment (Zimbardo et al. 1973), demonstrated institutional aggression. Zimbardo created a mock prison and recruited students to take part. Some were told to play the role of guards, whilst the others acted as prisoners. Both groups were given appropriate uniforms and the prisoners were referred to only by their assigned numbers. The prison setting, along with the deindividuation of the inmates and guards upon entry, caused the experiment to become so violent that it was terminated after six days of the intended fourteen.

Institutionalisation into prisons is a possible explanation of aggression. However, it does not account for the aggressive behaviour that initially leads to institutionalisation, which, as the importation model points out, must already have existed. Moreover, it does not explain why aggression exists outside such institutions: not all violent acts are reported to the police, and those that are do not necessarily result in prison sentences.

Biological psychologists offer alternative explanations of aggression to social and behaviourist psychologists. Instead of pointing to the environment an individual is in as the cause of aggression, they claim that violence can stem from genes, hormonal mechanisms and neural mechanisms. The neurotransmitters dopamine and serotonin in particular have been linked to aggression, when levels of the former are high and levels of the latter are low. Dopamine has been linked to aggression because of its association with pleasure: it is the neurotransmitter stimulated after eating certain foods or sexual intercourse, and it has been found to become more abundant after violent behaviour. The ‘reinforcing’ nature of dopamine could therefore encourage violent behaviour. Increases in dopamine activity have also been shown to increase aggression, as evidenced by the use of amphetamines, which stimulate dopamine. Serotonin’s function in the brain is to inhibit the firing of other neurons, especially in the prefrontal cortex, the area of our neuroanatomy responsible for, among other things, cognitive reasoning and social behaviour. It is where our morals are reasoned and the consequences of our actions are considered.

Evidence for the roles of serotonin and dopamine in aggressive behaviour comes from Ferrari et al. (2003). In their experiment, they allowed a rat to fight at exactly the same time every day for ten consecutive days. On the eleventh day, at the time the rat was due to fight, they measured its dopamine and serotonin levels, and found the former to be high and the latter to be low. This demonstrates that the mere anticipation of a fight was enough for the rat to adjust its brain chemistry.

Neural mechanisms have an undeniable link to aggression. However, it is difficult to determine causation: if the rat in Ferrari’s experiment changed its brain chemistry in anticipation of a fight, this may be a physiological adaptation designed to make the rat more likely to survive, rather than the varying neurotransmitter levels making the rat aggressive in the first instance. It could therefore be aggression causing high levels of dopamine and low levels of serotonin, rather than neural mechanisms causing aggression.

Hormones such as testosterone and cortisol are also frequently linked with aggressive behaviour. Although the role testosterone plays in aggression is not absolutely clear, it has been suggested that it can inhibit activity in the brain’s orbitofrontal cortex, which controls impulsivity, leaving the aggressor with reduced control over their actions. Conversely, the hormone cortisol appears to combat aggression, so insufficient levels of cortisol can result in aggressive behaviour. Cortisol is released into the bloodstream as part of the body’s response to chronic stress, and assists the regulation of glucose levels in the body, which has a calming effect. If an inadequate amount of cortisol is released, a person’s body may fail to stabilise its sugar levels when exposed to stress, which can cause them to become aggressive.

The impact of hormonal mechanisms on aggressive behaviour is supported by the research of Book et al. (2001), who found a correlation of +0.14 between testosterone and aggression after carrying out a meta-analysis of 45 studies. Similarly, McBurnett et al. (2000) found that boys with behavioural difficulties were three times more likely to display aggressive symptoms if their cortisol levels were low.
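It is worth pausing on how weak a correlation of +0.14 really is. Squaring a correlation coefficient gives the proportion of variance the two variables share, a standard interpretation rather than anything Book et al. report directly:

```python
# Book et al. (2001) meta-analytic correlation between testosterone and aggression
r = 0.14

# r-squared: proportion of variance in aggression statistically shared
# with testosterone levels
r_squared = r ** 2
print(f"{r_squared:.1%}")  # 2.0%
```

In other words, on these figures testosterone accounts for only about 2% of the variation in aggression, which should temper how much weight we place on this finding.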

While there is some evidence in favour of hormonal mechanisms as an explanation for aggression, there is also a good deal of contradictory evidence. Kreuz and Rose (1972) studied prisoners, some of whom were known to be frequently involved in fights, whilst others were not. They found no significant difference in testosterone levels between the prisoners who fought and those who did not, suggesting that testosterone does not play an important role in aggressive behaviour. Similarly, Gerra et al. (1997) found a positive correlation between cortisol and aggression, suggesting that cortisol may not subdue aggression and may instead aggravate the problem.

Behavioural genetics is also frequently put forward as a biological explanation for aggression. Much of the research involving genetics utilises twins, as this allows the researcher to determine whether a person’s genotype has caused aggression, or whether it was caused by interaction between genes and the environment. Research has also focused on adopted children with at least one biological parent known to engage in aggressive or antisocial behaviour, to determine whether they develop the same traits as their parents despite not growing up in the same environment as them.

McGuffin and Gottesman (1985) compared aggression in monozygotic (identical) and dizygotic (fraternal) twins, finding a concordance rate of 87 percent for aggression in monozygotic twins, compared with 72 percent in dizygotic twins. As monozygotic twins share all of their genes, whereas dizygotic twins share only half, this suggests that genes play a large role in aggressive behaviour. Similarly, Hutchings and Mednick (1973) studied 14,000 adopted children in Denmark and found a positive correlation between convictions for violent behaviour amongst biological fathers and convictions amongst their adopted-away sons. This is evidence for the role of genes in aggression, as the sons engaged in violent activity despite being removed from their violent fathers.
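As a rough back-of-the-envelope illustration, twin data like this is often summarised with Falconer’s formula, which estimates heritability as twice the difference between the identical-twin and fraternal-twin figures. Strictly it applies to twin correlations, so using concordance rates here is only a textbook-style approximation, and not a calculation McGuffin and Gottesman themselves present:

```python
# Concordance rates for aggression from McGuffin and Gottesman (1985)
c_mz = 0.87  # monozygotic (identical) twins
c_dz = 0.72  # dizygotic (fraternal) twins

# Falconer's formula: heritability ~ 2 * (MZ - DZ similarity)
h_squared = 2 * (c_mz - c_dz)
print(round(h_squared, 2))  # 0.3
```

On this crude estimate, roughly 30% of the variation in aggression would be attributable to genes, which fits the essay’s broader point that genes matter but are far from the whole story.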

Whilst there is clearly strong evidence supporting genetics as an explanation for aggression, it is unlikely that a person’s genotype alone is responsible for aggressive behaviour; it is more likely that aggression arises from interaction between genes and the environment. The gene encoding monoamine oxidase A (MAOA) in particular has been the focus of research into gene–environment interaction and aggression. MAOA is otherwise known as the ‘warrior gene’, and a low-activity variant of it is known to correlate positively with violent behaviour. Caspi et al. (2002) found that children who had been mistreated and who carried this variant were more likely to engage in aggressive behaviour as a result, whereas children who carried the variant but were not abused were less likely to become aggressive.

It is frightening to think that there is a possibility aggression may be written in our DNA and biochemistry, and that we have little ability to exercise free will to combat it. However, I don’t believe we can categorically conclude that aggression is caused by biological factors alone. Similarly, I don’t believe deindividuation, social learning theory or institutionalisation universally explain the cause of aggression. Whilst they all have merits as theories, and whilst it is evident that genetics and physiology can influence aggressive behaviour, I believe that the human experience is far too broad and full of trepidation for there to be one sole cause for aggressive behaviour.

Denisovan Girl

I previously wrote about my interest in the Human Genome Project and the possibilities it could bring. As some of you may have seen on the news lately, those possibilities were realised when scientists were able to map the entire genome of a fossilised Denisovan girl from a finger bone using single-stranded DNA sequencing. The remains were found in 2008 in a Siberian cave by members of the Institute of Archaeology and Ethnology of Novosibirsk, and were carbon dated to approximately 40,000 BP, although different publications specify the date as being much older.

The Denisovans were initially thought to have bred with Neanderthals, although the University of Cambridge has disputed this and said that the DNA crossover is more likely to be the result of shared ancestry and not interbreeding. However, the Denisovans did successfully breed with the ancestors of modern-day humans, which has resulted in DNA crossovers with Melanesians and Australian Aborigines (as much as 3-6%, depending on the source).

With my foremost interests being in neuroscience and genetics, what I found most fascinating is what the Denisovan girl has taught us about the evolution of the human brain. At least eight genes associated with nerve growth (including SLITRK1, KATNA1, ARHGAP32 and HTR2B) have emerged since the time of the Denisovans. Other conserved genes associated with language development (ADSL, CBTNAP2 and CNTNAP2) were present but have undergone changes within modern-day humans. These genes are thought to be among the causes of autism when they mutate, which also suggests that there may be a link to the emergence of empathy within humans. The appearance and alteration of these genes demonstrate how far our complex cognitive processes have come since the time of our ancestors.

Although autism is not thought to result from the mutation of single chromosomes, this research could certainly give rise to some much-needed breakthroughs for a disorder that can be particularly isolating and heartbreaking.