Thursday, February 19, 2009

Evolutionary Trade-Offs: Sickle Cell Disease and Malaria

Evolutionary trade-offs in performance from one environment to another have long been thought essential to the balance of populations and the distribution of organisms, and the idea is central to natural selection: the process in nature by which, according to Darwin's theory of evolution, the organisms best adapted to their environment tend to survive and transmit their genetic characteristics in increasing numbers to succeeding generations, while those less well adapted tend to be eliminated. A mutation that is advantageous in a given environment increases the likelihood of reaching reproductive age, allowing the gene to be passed on to the next generation; it is therefore a positive mutation. If a mutation is detrimental to health and its carrier dies before reproducing, the mutation is lost and is therefore a negative one. Natural selection has to be considered in the context of "pre-modern" societies. Modern medicine has altered the balance of nature and often allows us to rescue people who would otherwise die of their condition. Juvenile diabetes, for example, is a hereditary disease that used to cause death in childhood; now that medical advances make it treatable, natural selection no longer removes it. In the absence of modern medicine, natural selection would suggest that no lethal hereditary disease should be passed on to the next generation. In fact this is not true: evolutionary trade-offs occur, in which a mutated gene carries both costs and benefits, and the benefits provide enough advantage for the gene to be passed on.
One evolutionary trade-off which has caused much interest in the medical world of late is the genetic trade-off evolving amongst people who carry genes for sickle cell disease, and the protection they have from malaria.
Malaria is one of the most common infectious diseases, striking some 400 million people each year, around 3 million of whom develop severe illness. The disease is caused by transmission of the Plasmodium parasite, usually via the bite of an infective female Anopheles mosquito. Only four species of Plasmodium infect humans, the most dangerous being P. falciparum (the other three being P. vivax, P. ovale and P. malariae).
The Plasmodium parasite spends part of its life in the mosquito and part in the human host. Infective plasmodial sporozoites enter the bloodstream from the saliva of the feeding female Anopheles mosquito. The Kupffer cells, part of the reticulo-endothelial system in the liver, clear sporozoites from the bloodstream and kill many of the organisms. A fraction of the sporozoites escape destruction, however, and penetrate the hepatocytes of the liver, where they multiply. Once inside a hepatocyte the parasite transforms into a new entity, the schizont, which replicates vigorously, forming merozoites that completely fill the cell. Rupture of the hepatocyte membrane releases merozoites into the bloodstream, where they invade circulating erythrocytes and assume a ring form called the trophozoite. These organisms consume haemoglobin, enlarge and metamorphose into schizonts and merozoites. Eventually the erythrocytes lyse and release merozoites that can penetrate new erythrocytes. Trophozoites can also form gametocytes, the sexual form of the parasite, which remain in the red cell. Transmission back to a feeding mosquito occurs when these gametocytes are taken up; the sexual reproduction cycle then begins in the mosquito, which subsequently leads to transmission to a new human host.
There are several points in this life cycle at which the parasite could be targeted for destruction. The first is when the sporozoites are injected into the bloodstream, where antibodies and the lymphocytes called ‘natural killer cells’ can attack them; prior exposure to the parasite produces a stronger, more effective immune attack thanks to conditioned lymphocytes. Another is the intrahepatic phase of malaria: in principle, a mutation altering the structure and function of hepatic cells so as to kill the parasite or slow its growth would prove effective, although none is known. The last phase at which the body could attack is red-cell invasion and multiplication, where a mutation that destroyed infected cells and parasites, allowing their replacement by uninfected red blood cells, could potentially eliminate the malaria parasite. It is at this phase that the mutated sickle haemoglobin (haemoglobin S) proves effective in impairing malarial growth and development when it is in its heterozygous state.
Sickle cell trait provides a survival advantage over normal haemoglobin in regions where malaria is endemic, for example West Africa. The first sign of the relationship between carriers of the mutant haemoglobin S and protection against malaria was the realisation that the geographical distribution of the gene for haemoglobin S and the distribution of malaria in Africa virtually overlap. Sickle haemoglobin provides the best example of a change in the haemoglobin molecule that impairs malarial growth and development.
The mechanism by which it does so is uncertain, but several hypotheses are in circulation.
In 1970, Luzzatto et al. suggested that low oxygen tension causes cells carrying haemoglobin S to sickle. When such an erythrocyte is infected by P. falciparum, the parasite's high metabolic rate reduces the oxygen tension within the cell, and the cell deforms. Deformation marks sickle-trait erythrocytes as abnormal, so they are removed from the circulation and destroyed by macrophages in the reticuloendothelial system.
Others suggest that the malaria parasite is damaged or killed directly within sickle-trait erythrocytes, impairing its proliferation. Ultrastructural studies showed extensive vacuole formation in P. falciparum parasites inhabiting sickle-trait red cells incubated at low oxygen tension, suggesting metabolic damage to the parasites (Friedman, 1979).
Other investigations suggest that oxygen radical formation in sickle-trait erythrocytes retards growth of, and can even kill, the P. falciparum parasite (Anastasi, 1984). Sickle-trait red cells produce higher levels of the superoxide anion (O2-) and hydrogen peroxide (H2O2) than normal erythrocytes do, and each compound is toxic to a number of pathogens, including malarial parasites.
Sickle cell disease in its homozygous form can prove fatal, and a sufferer is unlikely to reach reproductive age, so the genes will not be passed on and their advantage in protecting against malaria will never be seen; negative selection therefore acts against sickle cell disease. However, it takes two copies of the mutant gene to give someone the full-blown disease. A heterozygote carrying one mutant allele and one normal allele (sickle cell trait) enjoys a heterozygous advantage because of the protection against malaria. This situation is known as a balanced polymorphism, in which the heterozygote for two alleles of a gene has an advantage over either of the homozygous forms. It does, however, allow transmission of the mutant gene: if two heterozygotes mate, there is a one-in-four chance of producing a child with the disease. Because heterozygotes survive better than either homozygote, the trait remains common in the population, and with it the chance of affected offspring.
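The arithmetic of this balanced polymorphism can be sketched in a few lines. The selection coefficients below are purely illustrative assumptions, not measured values; the point is the textbook population-genetics result that, with heterozygote advantage, the sickle allele settles at an equilibrium frequency of s/(s + t):

```python
from itertools import product

# Hypothetical relative fitnesses (illustrative values only):
#   AA (normal)        1 - s   vulnerable to malaria
#   AS (sickle trait)  1       protected from malaria, no disease
#   SS (sickle cell)   1 - t   severe disease
s, t = 0.15, 0.80

# A cross between two carriers (AS x AS) gives the classic Mendelian ratio:
offspring = ["".join(pair) for pair in product("AS", repeat=2)]
# AA, AS, SA, SS -> 1/4 unaffected, 1/2 carriers, 1/4 affected

# Textbook equilibrium for heterozygote advantage: q(S) = s / (s + t)
q_eq = s / (s + t)  # about 0.158 with these assumed coefficients

# Iterating selection from a rare starting frequency converges on q_eq
q = 0.01
for _ in range(500):
    p = 1 - q
    w_bar = p * p * (1 - s) + 2 * p * q + q * q * (1 - t)  # mean fitness
    q = (p * q + q * q * (1 - t)) / w_bar  # S-allele frequency after selection
```

Neither allele can displace the other: selection against SS homozygotes is balanced by selection against malaria-vulnerable AA homozygotes, which is exactly the trade-off described above.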
Increasing knowledge of the evolutionary trade-offs that occur in pathogens has been used as the basis of new medical treatments. One example is HIV. Human immunodeficiency virus is one of the fastest-evolving entities known, which makes it very difficult to treat: resistant strains of HIV evolve when exposed to antiretroviral drugs, because the virus accumulates many mutations during reproduction. However, the advantage of resistance to antiretroviral drugs comes with its own costs. If a resistant and a non-resistant organism are placed in head-to-head competition in the absence of the pesticide or drug, the non-resistant organism generally wins. This idea is the basis of a new dosage regimen for antiretroviral therapy.
If a patient has developed a strain of virus resistant to a drug and then stops taking that drug, evolutionary theory suggests that the viral population will evolve back towards the non-resistant strain through natural selection. If the patient then takes a strong dose of the same drug, it can effectively retard the replication of the now drug-sensitive viruses, reducing the viral load to very low levels.
This therapy has shown early, promising results; its efficacy is low and it may not eliminate HIV, but it could slow the progression of the disease. It has nevertheless laid the groundwork for further research, and shows that we can exploit these evolutionary trade-offs in medical treatment.
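The head-to-head competition underlying this regimen can be caricatured with a toy two-strain model. All the growth factors below are invented for illustration; the only feature that matters is the trade-off itself: resistance is worth its replication cost only while the drug is present.

```python
def compete(drug_on, generations=60, resistant_frac=0.9):
    """Return the resistant fraction after repeated rounds of replication.

    Assumed per-generation growth factors (illustrative only): the wild type
    replicates fastest off the drug but is crippled on it, while the
    resistant strain pays a constant fitness cost for its resistance.
    """
    wt_growth = 0.1 if drug_on else 1.5
    res_growth = 1.3
    wt, res = 1 - resistant_frac, resistant_frac
    for _ in range(generations):
        wt, res = wt * wt_growth, res * res_growth
        total = wt + res
        wt, res = wt / total, res / total  # renormalise to fractions
    return res

# Stop the drug: the fitter wild type reclaims the population...
off_drug = compete(drug_on=False)
# ...then a strong dose suppresses the now drug-sensitive majority.
on_drug = compete(drug_on=True)
```

Starting from a mostly resistant population, the resistant fraction collapses once the drug is withdrawn, which is the window the proposed strong re-dose aims to exploit.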

The Evolution of Lactose Tolerance

In the world today, the majority of adults are lactose intolerant. In certain populations, however, the exact opposite is true. In this essay I will consider the evolutionary explanation for this.

At birth, all humans produce the enzyme lactase. Lactase hydrolyses the disaccharide lactose into the monosaccharides galactose and glucose. In humans, lactase is present mainly along the brush border membrane of the enterocytes lining the villi of the small intestine. It is essential for digestive hydrolysis of the lactose in milk, as lactose itself cannot be directly absorbed into the bloodstream at any point along the gastro-intestinal tract. In non-dairy-consuming societies, lactase production usually drops by about 90 per cent during the first four years of life (after the weaning phase is over), although the length of this period varies between individuals. This is due to the ‘switching off’ of the lactase gene. This used to be true of the human race in general, but in dairy-consuming areas a mutation in the gene regulating the ‘switching off’ of lactase production, situated on chromosome 2, has now become very common. Such a mutation is known to have arisen among an early cattle-raising people, the Funnel Beaker culture, who lived in north-central Europe around 5,000 to 6,000 years ago. This lactase-persistence allele is found in more than 90 per cent of Danes and Swedes, and 50 per cent of Spanish and French people – the mutation becomes progressively less common in Europeans who live at increasing distances from the ancient Funnel Beaker region. It is rare in non-pastoral communities such as the Chinese (only 1 per cent of the population have it). In pastoral areas of East Africa the European allele is likewise found at very low frequency, yet many adults there are lactose tolerant. An international team of researchers has found that lactose tolerance in East African adults is conferred by three newly discovered variants of the lactase gene, all independent of each other and of the European variant. As with the European allele, each mutation lies in the control region of the gene.
These African variants appear to have arisen several thousand years later than the European allele. This fits well with archaeological evidence showing that pastoral farming groups from the north reached northern Kenya around 4,500 years ago, and southern Kenya and Tanzania about 3,300 years ago. In both Europe and Africa, the mutations have arisen after a long period of sustained pastoral lifestyle. The mutations have simply developed differently in the two different areas.

There was at first much debate as to whether the consumption of dairy produce caused the increased frequency of the lactase-persistence allele – the culture-historical hypothesis – or whether a dairy diet was in fact adopted after the mutation became common – the reverse-cause hypothesis (very much a chicken-and-egg deliberation). An analysis of DNA samples taken from skeletons of early European farmers shows that the lactase-persistence allele was not present, supporting the culture-historical hypothesis. This means that an increase in dairy farming caused the prevalence of the lactase-persistence allele to increase gradually over the generations, and that the current high prevalence of the allele must be due to natural selection. In ‘The Origin of Species’, Darwin explains natural selection thus: ‘Owing to this struggle for life, any variation, however slight and from whatever cause proceeding, if it be in any degree profitable to an individual of any species... will tend to the preservation of that individual, and will generally be inherited by its offspring. The offspring, also, will thus have a better chance of surviving, for, of the many individuals of any species which are periodically born, but a small number can survive. I have called this principle, by which each slight variation, if useful, is preserved, by the term of Natural Selection, in order to mark its relation to man’s power of selection.’ In this instance, once dairy farming had developed in Europe, the increased exposure to milk (and the benefits associated with drinking it – see later) meant that the few people who carried the mutated lactase-persistence allele were more likely to survive, therefore more likely to mate, and thus more likely to pass on their alleles to their offspring.
Gradually, over many generations, this has increased the allele frequency and completely reversed the ratio of lactose intolerance to tolerance in the areas of the world with a significant pastoral history.
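A small simulation shows how quickly even a modest advantage compounds. The starting frequency and selection coefficient are assumptions chosen for illustration (lactase persistence is treated as a dominant allele T whose carriers enjoy a 5 per cent fitness edge), not estimates from the literature:

```python
def allele_trajectory(p0=0.01, s=0.05, generations=300):
    """Frequency of a dominant, advantageous allele T over the generations.

    Assumed fitnesses: TT and TC carriers 1 + s, CC (non-persistent) 1.
    """
    p = p0
    history = [p]
    for _ in range(generations):
        q = 1 - p
        w_bar = (p * p + 2 * p * q) * (1 + s) + q * q  # mean fitness
        p = (p * p + p * q) * (1 + s) / w_bar          # T frequency after selection
        history.append(p)
    return history

traj = allele_trajectory()
# Starting from a rare mutation, the allele reaches high frequency within a
# few hundred generations -- several thousand years at roughly 25 years per
# generation, broadly the timescale the archaeological record suggests.
```

The trajectory rises every generation, slowly at first while the allele is rare and then steeply, which is the "gradual reversal of the ratio" described above.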

This suggests that lactose tolerance is an evolutionary advantage wherever milk is readily available. In areas such as China, where milk and dairy products are not widely consumed, the mutated allele remains confined to a very small minority. There is much speculation as to why the lactase-persistence allele is so advantageous. Milk is uncontaminated by parasites, unlike stream water, making it a safer drink. Moreover, if those intolerant of lactose drank milk, they would develop diarrhoea and vomiting – potentially lethal in difficult living conditions, where in the most extreme cases they could die of dehydration. Another suggestion is the benefit of a continuous supply of milk as opposed to seasonal crops: cows give milk all year round, whereas crops thrive only at certain times of year. Milk is also highly nourishing – rich in fat and calcium, among other nutrients. All in all, the ability to drink milk gave some early Europeans and East Africans a substantial survival advantage. Even so, around 30 per cent of European adults are still lactose intolerant (i.e. do not have the lactase-persistence allele). Of these, 24 per cent actually have secondary lactose intolerance as a result of coeliac disease, in which the body produces antibodies to the gluten in wheat products. This causes inflammation and damage to the villi of the small intestine, especially in the apical region, where the concentration of lactase is highest. A sufferer of coeliac disease can thus be temporarily lactose intolerant for a few years (with a gluten-free diet, the gut eventually heals fully). This means the percentage of the population without the lactase-persistence mutation could be even lower than 30 per cent.

In conclusion, the evolution of lactose tolerance is a very good example of natural selection that has occurred in certain populations in different parts of the world in relatively recent times. Lactose tolerance is caused by an inherited mutation that causes lactase production to continue throughout adulthood.

Why do humans reproduce sexually?

The evolution of sexual reproduction is not widely discussed or questioned in non-academic circles; it is taken as a given that humans, and the majority of species throughout time, reproduce through the fusion of gametes. Yet it is not certain that this is the most efficient and secure way of continuing our species. Asexual reproduction is accepted as the most rapid way to expand a population from one generation to the next, as described by John Maynard Smith,[1] so why do humans not reproduce in this way? Many theories have been put forward for sexually reproducing species in general; some apply only to certain species, and many explanations intertwine with one another. One justification is the increased capacity for genetic variation, which links closely to the hypothesis that sexual reproduction exists primarily to resist parasites. Both are countered by the suggestion that sex is necessary to remove deleterious genes, or is an adaptation to substantial chromosomal damage and mutation. These theories are combined below, along with my own thoughts, in order to ascertain in my own mind why Homo sapiens has evolved to reproduce sexually.

Charles Darwin, the father of modern evolutionary thinking, stated that the advantage of sex is that “the offspring of two individuals, especially if their progenitors have been subjected to very different conditions, have a great advantage in weight, constitutional vigour and fertility over the self fertilized offspring from either one of the same parents.”[2] He thus suggests that sex, rather than asexual reproduction, creates a genetically superior individual. This supports the hypothesis of repair and complementation, or hybrid vigour, in which genetic recombination and outcrossing are adaptations to chromosome damage and mutation; it is suggested that recombination during meiosis (and thus sexual reproduction) is primarily required to repair damaged DNA. However, this contradicts the widely accepted suggestion that recombination evolved primarily to increase genetic variation between parent and offspring. If Darwin were still to maintain the significance of recombination in the evolution of sexual reproduction, his theory would therefore have to shift towards the view that sex chiefly occurs in humans in order to maintain variation. Nevertheless, as will be pointed out later, it is questionable how vital variation is to the human race specifically. In addition, it has been argued, by Eshel among others, that recombination disrupts positive combinations of genes more often than it creates them.[3] Consequently there are evident flaws in Darwin’s suggestion and in the repair and complementation hypothesis, particularly concerning the original purpose and importance of recombination during meiosis.

Darwin’s idea that the birth of a genetically stronger human is the main reason for the evolution of sex points to the ability to survive, and can thus be linked with the Red Queen Hypothesis,[4] in which the fundamental advantage of sexual over asexual reproduction is resistance to parasites. The hypothesis proposes that the underlying purpose of sexual reproduction is to allow enough genetic variation for a population's evolution to keep pace with the evolution of its parasites. This suggests that sexual reproduction is not only an outcome of evolution but also a cause of continued evolution once it had manifested itself in our ancestors. Ridley’s account of this theory currently commands the widest support among leading academics, and can be applied directly to human evolution, convincingly arguing why asexual reproduction does not occur in Homo sapiens.

The basic purpose of natural selection seems to be survival, which depends on the ability to adapt to a change in the environment, usually through an increase in variation and in the size of a species’ gene pool. This relates to the genetic variation that the Red Queen Hypothesis supposes to be necessary. It is also directly supported by Weismann’s original theory that the advantage of sexual reproduction is variation between siblings.[5] Ghiselin supported this with the Tangled Bank Hypothesis,[6] which suggests that an assorted population of siblings may be able to extract more food from their environment. However, this applies poorly to humans, owing to the relatively few siblings produced throughout our evolution and to the fact that Homo sapiens is a highly homogeneous species – DNA between individuals is very similar. Sex may also be necessary because of its greater ability, compared with asexual reproduction, to produce new genotypes. This again puts the emphasis on recombination, which allows two advantageous alleles on the same chromosome in different individuals of a population eventually to combine; in an asexual population, a chromosome could gain both alleles only by mutation. However, this theory depends on group selection, a weaker selective force than individual-level natural selection, since the motivating factor is benefit to the group rather than individual fitness. The cost to the individual is captured by the Two-Fold Cost of Sex, put forward by John Maynard Smith,[7] whose theory states that asexual reproduction would bear more young than sexual reproduction. In addition, the two sexes have to find one another, and are attracted only to certain features, so it is unlikely that advantageous alleles would come together within a population. The Two-Fold Cost of Sex has been countered, however, by the observation that species able to reproduce by both means choose to reproduce sexually whenever they can.
This points to an overall advantage of sexual reproduction in terms of the central theme of survival. Yet if the main purpose of sexual reproduction in humans is simply survival, why has the evolution of feelings such as love occurred? It might be suggested that love has the purpose of leading to reproduction, yet without love there would be more indiscriminate mating within the species, increasing variation within the human population. Survival through variation may therefore not be the main purpose of sexual reproduction, which lends further support to the Red Queen Hypothesis.
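Maynard Smith's two-fold cost can be made concrete with a back-of-the-envelope model. Assume, purely for illustration, that every female produces b offspring: in an asexual lineage every individual is a reproducing daughter, while in a sexual lineage only half the offspring are daughters.

```python
def population_after(generations, sexual, start=100, b=2):
    """Population size when each female produces b offspring per generation."""
    n = start
    for _ in range(generations):
        if sexual:
            n = (n / 2) * b  # only the female half of the population breeds
        else:
            n = n * b        # every individual breeds
    return n

asexual = population_after(10, sexual=False)  # grows b-fold each generation
sexual = population_after(10, sexual=True)    # with b = 2, merely holds steady
```

With b = 2, after ten generations the asexual lineage has multiplied over a thousandfold while the sexual one has not grown at all, which is why the persistence of sex demands a compensating advantage such as the variation discussed above.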

Whereas the Tangled Bank Hypothesis refers to the recombination of advantageous genes, there is also a theory that the purpose of sexual reproduction is to remove, through natural selection, the harmful genes caused by mutation. Kondrashov’s theory, the Deterministic Mutation Hypothesis,[8] suggests that because each mutation is only slightly damaging, mutations accumulate from one generation to the next until the individuals that have acquired many of them die. However, he notes that there must be at least one adverse mutation per genome per generation for the theory to apply. This is feasible for humans because of the “genomic deleterious mutation rate … [of] at least 3”[9] in Homo sapiens.
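Kondrashov's argument can be sketched as a deterministic toy model. The fitness function and parameters below are assumptions chosen only to exhibit the mechanism: mutations interact synergistically (each extra one hurts more than the last), each genome gains on average U = 3 new deleterious mutations per generation, and free recombination is approximated by letting an offspring inherit each of its two parents' mutations with probability one half.

```python
import math

U, M = 3.0, 80  # mutations per genome per generation; cap on tracked load
POISSON = [math.exp(-U) * U**k / math.factorial(k) for k in range(M)]

def normalise(dist):
    total = sum(dist)
    return [x / total for x in dist]

def select(dist, s=0.02, epi=0.01):
    # synergistic epistasis: fitness falls faster than multiplicatively
    return normalise([p * math.exp(-(s * m + epi * m * m))
                      for m, p in enumerate(dist)])

def mutate(dist):
    # each genome gains a Poisson(U) number of new mutations
    out = [0.0] * M
    for m, p in enumerate(dist):
        for k, q in enumerate(POISSON):
            if m + k < M:
                out[m + k] += p * q
    return normalise(out)

def recombine(dist):
    # random mating; offspring load ~ Binomial(a + b, 1/2) for parents a, b
    out = [0.0] * M
    for a, pa in enumerate(dist):
        for b, pb in enumerate(dist):
            w = pa * pb
            if w < 1e-12:
                continue
            n = a + b
            for m in range(min(n, M - 1) + 1):
                out[m] += w * math.comb(n, m) * 0.5 ** n
    return normalise(out)

def mean_load(dist):
    return sum(m * p for m, p in enumerate(dist))

sexual = asexual = [1.0] + [0.0] * (M - 1)  # everyone starts mutation-free
for _ in range(30):
    asexual = select(mutate(asexual))
    sexual = select(recombine(mutate(sexual)))
```

Recombination restores the variance in mutation number that selection strips away, so each selective death removes more mutations from the sexual population; after a few dozen generations its mean load sits below the clonal population's, which is the purging effect Kondrashov describes.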

All the above-mentioned approaches carry some weight in determining the reasons for the evolution of sex in humans. The repair and complementation hypothesis is said to produce a human being with a greater chance of survival, as is the theory that genetic variation is the core reason for sexual reproduction. The Red Queen Hypothesis likewise centres on the ability of sex to allow greater variation, so, the Deterministic Mutation Hypothesis aside, variation is a common theme throughout. Variation is also the backbone of Darwin’s theory of Natural Selection, in which individuals with advantageous genes pass those characteristics on to their offspring. Despite the more rapid reproductive potential of agamogenesis, variation, and thus a greater chance of survival, is evidently the advantage that sexual reproduction brings to Homo sapiens.

[1] J. M. Smith, The Evolution of Sex, 1978
[2] C. Darwin, The Effects of Cross and Self Fertilisation in the Vegetable Kingdom, 1876
[3] I. Eshel and M. W. Feldman, 1970
[4] M. Ridley, The Red Queen: Sex and the Evolution of Human Nature, 1995
[5] A. Weismann, Essays upon Heredity and Kindred Biological Problems
[6] M. Ghiselin, The Economy of Nature and the Evolution of Sex, 1974
[7] J. M. Smith, The Evolution of Sex, 1978

Should being a creationist automatically disqualify applicants for admission to medical school?

Before answering this question I will explain what being a creationist means and the different types of belief a creationist can hold. I shall then explore the reasons why one might want to disqualify a medical school applicant on the grounds of being a creationist, and the reasons why being a creationist should not matter for a medical student, before reaching a conclusion.

Creationism is the belief that each species was created separately, in its current form, by God. It has experienced a rise in popularity in the last century, especially in America. In a 2005 survey, it was found that 42% of Americans believe that ‘humans and other living things have existed in their present form since the beginning of time’.1 Creationism encompasses many different beliefs – the main ones being Young-Earth and Old-earth creationism, which I will explain further.

Young-earth creationism is the belief that the Bible is literally true: that God created the world in six days, and that the world is between 6,000 and 10,000 years old according to the Bible, rather than the 4.5 billion years accepted by most scientists.2 Young-earth creationists generally view the Bible as equal to textbooks in scientific accuracy. A 2009 survey by the Theos think tank showed that 11 per cent of people in the UK believed that ‘God created the world sometime in the last 10,000 years’, and a further 21 per cent thought it was probably true.1

Old-earth creationism itself encompasses several categories, such as gap creationism, day-age creationism and progressive creationism; what distinguishes it from young-earth creationism is the belief that the earth is 4.5 billion years old, rather than the 10,000 years a literal reading of the Bible would make it. Gap creationism holds that a long period of time elapsed between God creating the earth and the rest of the creation story. Day-age creationists believe that each ‘day’ in the creation story refers to a non-specific period of time (perhaps millions of years) rather than 24 hours. Progressive creationists believe that creation was a gradual process involving evolution within species (microevolution), but that new species were created by God’s intervention and not by evolution.

Intelligent Design is a recently developed belief which grew out of creationism. Its adherents do not accept natural selection as the driving factor of evolution, since it is an ‘unguided, purposeless change’.3 They believe the best explanation for the complexity of life and the universe is the existence of an intelligent creator.

On one hand, there is an argument that being a creationist should disqualify you from medical school. If the medical student goes into research, they may be biased by their creationist views, which could hinder scientific progress. ‘Creation science’ is the attempt by creationists to use scientific research to prove the Genesis account of creation, and it has consistently been rejected by scientists as valid research. As Darwin himself said, ‘we can only say that…it has pleased the Creator to construct all the animals and plants…but this is not a scientific explanation’.4 It includes creation biology, which attempts to prove that different species did not originate from the same ancestor, and flood geology, which tries to show how a worldwide flood, as documented in the Bible, is compatible with geological evidence. Many of these flood-geology theories were published in the book ‘The Genesis Flood’ by Morris and Whitcomb in 1961.5 The popular geology journals of the time completely ignored the book, and one review in the ASA Journal called its authors ‘pseudo-scientific pretenders’. Yet despite all this, creation science was taught in many American schools. In 1925, the Butler Act in Tennessee forbade publicly funded teachers ‘to teach any theory that denies the Story of the Divine Creation of man as taught in the Bible, and to teach instead that man has descended from a lower order of animals’, and this remained law until 1967. This can hardly have advanced scientific education.

Furthermore, if a medical school applicant is a young-earth creationist, this may suggest a personality unsuited to being a doctor. There is overwhelming scientific evidence that the earth is 4.5 billion years old: examples include radiometric dating, which uses the half-lives of radioactive elements in rocks to determine their ages; the study of ice cores; and dendrochronology, the analysis of annual tree rings.2 An inability to accept this evidence might indicate an inflexible, irrational personality. That would be a disadvantage in a doctor, for whom the ability to work in a team and to consider many different viewpoints is crucial. Such an applicant might also hold very strong views on contentious issues such as abortion, perhaps influencing a patient on important decisions rather than giving a balanced view.

However, there are also many arguments why being a creationist should not disqualify an applicant from medical school. To begin with, being a creationist does not necessarily affect the quality of care a doctor gives a patient, so it is an irrelevant criterion by which to judge a potential medical student. A young-earth creationist might hold more radical views, but old-earth creationists accept some scientific findings, such as the age of the earth, and are thus less likely to hold radical views on other topics; factors such as communication skills and academic ability should be considered more significant. Their belief might even be counted a positive attribute – it suggests they know where they stand on difficult issues and, as a result, have thought their positions through.

Furthermore, as creationists believe each species was created separately by God, they may have a high regard for human life, which is of value in a doctor. The concept of evolution is that the fittest survive and breed while the weaker die out. This line of thinking gave rise to eugenics – ‘good genes’: Darwin’s cousin, Francis Galton, recommended that the fitter and more intelligent be encouraged to breed, and the less fit discouraged. Later, this belief was taken up by Hitler, who wanted to rid the world of ‘inferior races’ and populate it with a pure Aryan race – a line of thinking totally and utterly opposed to medicine. Perhaps the compassion and desire to help others needed in a doctor is found more in creationist views than in evolutionary principles.

In conclusion, I believe it depends on the type of creationist you are. There is an argument for disqualifying young-earth creationists from medical school, as they reject a large body of scientific research on the topic; they might make more irrational doctors and, if they went into research, might hinder scientific progress. As for other creationist views, as long as the beliefs do not adversely affect the treatment of patients, they should not disqualify an applicant. They may even suggest that the student has thought things through and holds human life in high regard – a valuable characteristic in a doctor. Finally, in a letter, Darwin once wrote: ‘There is no reason why the disciples of [religion and science] should attack each other with bitterness, though each upholding strictly their beliefs.’6

1. ‘Rescuing Darwin: God and evolution in Britain today’, Nick Spencer and Denis Alexander, 2009. Chapter 2: God after Darwin: The twentieth century and the rise of creationism, page 26; Chapter 3: Darwin today, pages 29–33.
2. ‘The Rough Guide to Evolution’, Mark Pallen, 2009. Creationism: A House Divided, page 284; How we know the earth is old, pages 12–13; Can a Christian believe in evolution, pages 280–281.
3. ‘Testing Darwinism’, Philip E. Johnson, 1997. Chapter 1: Emilio’s Letter, page 16.
4. ‘The Origin of Species’, Charles Darwin, 1859. Chapter XIV: Morphology, page 383.
5. ‘The Genesis Flood’, Morris and Whitcomb, 1961.
6. A letter from Darwin to his friend and local vicar, John Brodie Innes, 27th November 1878.

Wednesday, February 18, 2009

HeLa Cells

“HeLa Cells: ethical nightmare, medical blessing, or evolution of a post-human species?”

HeLa cells are known globally as one of the greatest medical discoveries of our time, and have underpinned many - if not all - of the important medical advances of the last century. However, despite their undeniable medical importance, their existence is shrouded in controversy, whether in their origin, their continuing use, or the revelation of their unusual chromosome number.
The story of HeLa cells begins with Henrietta Lacks, a young mother of five who lived in Baltimore during the 1940s and 50s. On February 1st 1951, at Hopkins Hospital, Henrietta was found to have malignant cervical cancer. At the same time in the hospital, the head of tissue culture research, George Gey, and his wife Margaret were working to find a cure for cancer. They were sure that if they could culture a line of human cells that could live indefinitely outside of the body, then the cure would soon follow. A sample of Henrietta Lacks’s cancer cells was taken, and a young resident who knew that the Geys were searching for a new sample to investigate sent it to George Gey. Henrietta’s cells turned out to be the break-through that Gey had been waiting for - her cells multiplied like nothing anyone had ever seen before, reproducing an entire generation every 24 hours; they were an immortal cell line, with the acquired ability to proliferate continuously, without any mechanisms of prevention. Gey named the cells “HeLa cells”, in honour of their source’s name, but claimed them as his own discovery, and spent the rest of his life profiteering from them.
This is when the first ethical dilemma arises; Henrietta Lacks had no idea that a sample of her tumour had been taken and sent to George Gey, and that her own cells would be used as a basis for medical research for decades to come. Her husband David knew that a sample had been taken, but was told that it was to see if the cancer was hereditary, and that it might help his children if the cancer struck again. Despite being told this, David Lacks never heard from the research team again. There is a question then, of whether appropriate consent was given. Having looked into this topic, however, I have found that there is no legal requirement to inform the patient or patient’s family in such a case, as any tissue obtained/removed by the physician during surgery, or indeed any medical procedure, in fact then becomes the property of the surgeon, to do with what he/she pleases. Personally, I find this to be an unorthodox rule, particularly in the case of Henrietta Lacks, when the physician in question profited so greatly from the sample he obtained. However, lack of informed consent was common in medical research at the time.
On the 4th October 1951, Henrietta Lacks passed away, and on the same day George Gey appeared on national television with a vial of his “HeLa” cells, stating “It is possible that, from a fundamental study such as this, we will be able to learn a way by which cancer can be completely wiped out.”
Soon after this statement was made before the American nation, the HeLa cell line was used to propagate poliovirus, which then led to the development of vaccines against polio - a medical triumph that saved thousands of lives, and one that could not have occurred were it not for the unique nature of HeLa cells. In this respect, we can indeed classify HeLa cells as a “medical blessing” - the fact that Gey and his team managed to propagate the poliovirus so quickly led to a surge of global interest in the HeLa cell line, and facilities for the mass production of HeLa cells were established by the National Foundation for Infantile Paralysis. Soon samples of Henrietta’s cells were being bought and sold in their millions worldwide, and were even sent up on early space missions to see what would happen to human cells in zero gravity.
Here the second ethical dilemma appears. Henrietta’s family had no idea that her cells were being used in such a way - they weren’t even aware her cells were still alive. It wasn’t until 24 years after her death, when her daughter-in-law heard about them from a scientist at a dinner party, that the issue came to the family’s attention. They were unable to afford a lawyer to take the case to court, and so nothing could be done about what the Lacks family believed to be a great injustice. One of Henrietta’s daughters is quoted as saying: "We never knew they took her cells, and people done got filthy rich, but we don't get a dime."
As I stated earlier, the HeLa cell line can be classed as a “medical blessing”, as it allowed previously impossible medical research to be carried out. It can be argued, however, that HeLa cells turned out to be more of a medical curse. In the years after their discovery, as more and more laboratories began using human cells in culture, scientists found that many of these other cell cultures had also begun to multiply indefinitely. For two decades research was done on what were thought to be normal human cells - for example, placental cells - but in 1974 it was revealed that HeLa cells, due to their robust nature and ability to spread and multiply so readily, had contaminated the world’s stock of cell cultures. This meant that billions had been spent on supposedly isolated tissue cultures whose results turned out to be invalid, as the cells were revealed to be HeLa cells. Although this caused obvious setbacks in the scientific world, I think some good may have come of it, as the methods of preparing and studying cell cultures have been vastly improved since then, significantly reducing the potential for human error.
Hopefully I have explained sufficiently the ethical and medical dilemmas surrounding the immortal HeLa cells, but what of the other question - the evolution of a post-human species? This question arises when we look at some of the unique features of HeLa cells: they are unarguably human, having been extracted directly from a human source, yet they have 82 chromosomes in their nuclei instead of 46 (including four copies of chromosome 12 and three copies of chromosomes 6, 8 and 17). In theory, if a HeLa cell could produce a gamete, it would not be able to fuse with a human gamete (due to the different chromosome number) - which, by the rules of species classification, would make HeLa cells an entirely new species: Helacyton gartleri, to use its proper given name.
It would appear that, through the mechanism of random mutation, the cervical cancer cells in Henrietta Lacks’s body acquired a selective advantage that allowed them to proliferate indefinitely and live for an extended length of time. But what is this selective advantage that has allowed HeLa cells to flourish? Research has shown that HeLa cells have an active form of the enzyme telomerase (inactive in most normal somatic cells, but active in most cancer cells). Telomerase acts during cell division and prevents the shortening of telomeres - a shortening that is associated with ageing and eventual cell death.
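The logic of that mechanism can be sketched as a toy simulation. This is only an illustration, not a biological model: the starting telomere length, the base pairs lost per division, and the critical length below which a cell stops dividing are all made-up round numbers, chosen simply to show why an active telomerase removes the built-in limit on divisions.

```python
# Toy model of telomere shortening across cell divisions.
# All three constants are illustrative placeholders, not measured
# values from HeLa cells or any real cell line.

START_LENGTH = 10_000    # initial telomere length in base pairs (assumed)
LOSS_PER_DIVISION = 100  # bp lost each division without telomerase (assumed)
CRITICAL_LENGTH = 4_000  # below this the cell stops dividing (assumed)

def divisions_until_senescence(telomerase_active: bool,
                               max_divisions: int = 1_000) -> int:
    """Count divisions before the telomere reaches the critical length.

    With telomerase active, the lost bases are restored after each
    division, so the count runs all the way to max_divisions - a
    stand-in for the 'immortal' proliferation of the HeLa line.
    """
    length = START_LENGTH
    for division in range(max_divisions):
        if length <= CRITICAL_LENGTH:
            return division          # cell has become senescent
        length -= LOSS_PER_DIVISION  # division erodes the telomere
        if telomerase_active:
            length += LOSS_PER_DIVISION  # telomerase rebuilds the ends
    return max_divisions

print(divisions_until_senescence(telomerase_active=False))  # 60
print(divisions_until_senescence(telomerase_active=True))   # 1000
```

Without telomerase the counter stops once the telomere erodes to the critical length; with it, the cell line never hits the limit, which is the essence of the selective advantage described above.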
It may be argued, then, that cancer cells are a form of “microevolution”: the cells within us are subject to the same laws of natural selection as we are, with HeLa cells emerging as a new branch of the evolutionary tree.
To conclude, I think that HeLa cells are all three: an ethical nightmare, a medical blessing, and the evolution of a post-human species. The Lacks family never gave their consent for Henrietta’s cells to be used in research, nor did they receive any acknowledgment for the great medical advances Henrietta gave to the world. Yet since the cells are technically not human, does the family have any right to them? Certainly this can be classified as an ethical nightmare.
From looking at the medical advances HeLa cells have provided, I would agree that they are a medical blessing, especially in light of their ability to keep their telomeres intact, which is now being used in medical research to see if the same mechanism can extend human life.

Monday, February 16, 2009

Purpose of this blog

This blog has been created by Professor Mark Pallen for the purpose of allowing medical students to blog on selected topics during a one-week student-selected activity. Students who choose to blog should post their work (1000-2000 words) by 5pm on Thursday 19th February 2009.