Applied Clinical Trials
Adequate female representation is mandatory, but gender differences should not be overestimated.
Women's participation in clinical studies and potential gender differences: these are fashionable topics, taken up frequently by the lay media during the last few years. The discussion about women as subjects in clinical research, however, is not new; on the contrary, it has a history of more than four decades, particularly in the United States. But the starting points of the discussion many years ago and the issues of today are very different. What happened in between?
Several medical tragedies during the 1960s and 1970s prompted public interest in protecting women and subsequently influenced regulatory authorities worldwide. The two most significant events were the thalidomide tragedy, in which the drug caused congenital abnormalities in children exposed in utero,1 and the occurrence of clear cell adenocarcinoma of the vagina and cervix in young women whose mothers had taken diethylstilbestrol during pregnancy.2,3
Although these unfortunate medical events mainly occurred in the context of medical practice rather than research, clinical research came to be widely perceived as risky and of minor benefit to research participants.4
Investigators were reluctant to include women in clinical trials, mainly due to safety and legal liability concerns. Subsequent regulations aimed at protecting vulnerable research subjects. In 1977, the FDA issued the guideline "General Considerations for the Clinical Evaluation of Drugs," excluding women of childbearing potential from Phase I and early Phase II studies of new drugs until reproductive toxicity studies were conducted and some evidence of effectiveness had become available.5
Although the exclusion of women explicitly did not apply to women with life-threatening diseases, the guideline further contributed to a general lack of women in clinical studies. As a consequence, data derived from a predominantly male population were extrapolated for clinical use in women. In the 1980s, there was growing recognition of the value of individualized therapy and that pharmaceuticals may need to be administered differently in different populations.6 Nevertheless, the 1985 New Drug Application Rewrite, pointing out the need to consider dosing in relevant subpopulations, specifically mentioned only elderly patients, children, and patients with impaired renal function—but not women.
The 1977 FDA guideline raised some ethical questions, e.g., whether it was appropriate for a guideline to presume women incapable of taking adequate measures to avoid pregnancy, and whether the protection of the fetus should in principle outweigh the interests of ill women. In the late 1970s, women's rights and the respect for their autonomy and decision-making capacity received increased attention.7
In addition, patient advocacy groups, such as AIDS activists, played an important role in changing this overprotective policy by demanding women's equal access to experimental therapies in the early development phases.8
Furthermore, during the 1980s there was evidence that gender differences in pharmacokinetics and pharmacodynamics may be meaningful and may have an impact on drug safety, efficacy, or dosing.9 The applicability of research data obtained from men to the treatment of women became questionable. Due to an overall lack of gender-related data, however, the question of the clinical significance of differences in pharmacokinetics (which have often been shown to be only subtle) remained open as well.
In 1988, the FDA specified its expectations for adequate analysis of NDA data: the "Guideline for the Format and Content of the Clinical and Statistical Sections of New Drug Applications" requested evaluation of pharmacokinetic, dose response, efficacy, and safety data in order to detect potential gender differences, carried out both for individual studies and for the overall clinical summaries.10
Nevertheless, fewer than 50% of submitted studies and only around 60% of overall NDA data presentations provided these analyses in the following years,7,11 demonstrating the limits of this guideline. Moreover, the guideline emphasized only the analysis of demographic data already collected in NDAs; until the early 1990s, there were no guidelines or regulations that addressed the inclusion of women in clinical drug studies.
Two primary documents in 1993 expressed the public health policy paradigm shift that lifted the 16-year-old restrictions of the 1977 guideline. The NIH Revitalization Act,12 demanding the adequate inclusion of women in NIH-sponsored studies, was signed into law in June 1993. Six weeks later, the FDA released the "Guideline for the Study and Evaluation of Gender Differences in the Clinical Evaluation of Drugs," requesting the routine inclusion of women even in the earliest phases of clinical studies.13 According to the 1993 FDA guideline, pharmacokinetic studies should be performed to assess potential gender differences. Clinical studies in general should reflect the population that will receive the drug once it is marketed, and enough women should be enrolled to allow the detection of clinically relevant gender differences in drug response.
As FDA guidance documents do not legally bind either the FDA or drug sponsors, two regulations with the force of law were later put in place. The 1998 "Final Rule: Investigational New Drug Applications and New Drug Applications"14 empowered the FDA to refuse any NDA without appropriate gender analysis. Nevertheless, the 1998 Final Rule is less specific than the 1993 guidance, as it requires only that NDA data be presented separately for men and women, without formal analysis or discussion of the data; in addition, it sets no standards for the inclusion of women. Finally, the Clinical Hold Rule15 of June 2000, the second legally binding regulation, even allows the FDA to place a study or an IND on hold if a sponsor excludes women from a study of a drug intended for the treatment of a life-threatening disease.
Both in Europe (particularly as a consequence of the thalidomide tragedy) and in Japan (traditionally), clinical research was essentially research in male subjects until the 1990s. Thereafter, the further development of regulatory standards in Europe and Japan was decisively influenced by the International Conference on Harmonization (ICH) process. A number of implemented ICH guidelines address gender issues today. Guideline E8 (General Considerations for Clinical Trials)16 requires a study population representative of the target patient population as well as Phase I pharmacokinetic information in women. Dose response data need to be obtained for relevant subpopulations, e.g., according to gender (Guideline E4 on Dose Response Information).17
Guidelines E3 (Structure and Content of Clinical Study Reports)18 and M4E (the efficacy part of the Common Technical Document)19 call for characterization of the patient population, analyses, and critical assessment of the data with respect to gender. All of these international guidelines have led Europe and Japan to align stepwise with the recent developments in the United States targeting appropriate representation of women in clinical studies and adequate analyses of gender-related effects.
With respect to the timing of reproductive toxicity studies, there are currently regional differences: whereas in the United States women of childbearing potential may be included in early clinical studies without reproductive toxicity information, assessment of female fertility and embryo-fetal development is mandatory in Japan prior to including these women in any clinical trial, reflecting the strictest regulation in the ICH region.20 Neither in Europe nor in Japan do regulations exist that determine the number of women to be included in clinical studies.
Five recent investigations are available (three in the U.S., one in Europe, and one in Japan) that address the question of whether women have been adequately enrolled in clinical drug development studies. In a 2001 report of the U.S. General Accounting Office,21 36 New Drug Applications (NDAs) involving 176,000 subjects—which the FDA had approved or categorized as approvable and labelled for use in both men and women between August 1998 and December 2000—were reviewed. Across all clinical studies for all NDAs, 52% of the study participants were women. A power analysis confirmed that, in all NDAs examined, enough women were enrolled in pivotal studies to demonstrate a drug's efficacy in female subjects.21
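To make the idea of such a power assessment concrete, the following Python sketch estimates the probability of detecting a difference in response rates within the female subgroup of a pivotal study. It is a minimal illustration under assumed numbers (response rates of 30% versus 20%, 250 women per arm, a two-sided alpha of 0.05); none of these figures are taken from the GAO report, and the arcsine approximation used here is only one of several common ways to carry out such a calculation.

```python
from math import asin, sqrt
from scipy.stats import norm

def power_two_proportions(p1, p2, n_per_group, alpha=0.05):
    """Approximate power of a two-sided test comparing two proportions,
    using Cohen's arcsine effect size h and equal group sizes."""
    h = abs(2 * asin(sqrt(p1)) - 2 * asin(sqrt(p2)))  # Cohen's h
    z_crit = norm.ppf(1 - alpha / 2)                  # two-sided critical value
    return norm.cdf(h * sqrt(n_per_group / 2) - z_crit)

# Hypothetical question: can 250 women per arm detect a 30% vs. 20% response rate?
print(f"power = {power_two_proportions(0.30, 0.20, 250):.2f}")  # roughly 0.74
```

In an actual NDA review, the observed effect size and the number of women actually enrolled in each pivotal study would be used in place of these assumptions.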
The FDA itself performed two studies: one on 185 new molecular entities (NMEs) approved by the Center for Drug Evaluation and Research between 1995 and 1999 (with a total of 493,000 subjects)22 and the other on 33 product license applications (PLAs) for new biological products approved by the Center for Biologics Evaluation and Research during the same time interval.23 The average percentage of females enrolled across all applications was 49% (NME) and 45% (PLA), respectively. In all three U.S. reviews, the general representation of males and females in clinical studies appeared to be similar. Focusing on women's participation by indication, the largest of the three studies demonstrated a range from 32% for antiviral products to 83% for urologic products.22 In only four areas did women comprise less than half of the study participants: cardiorenal, oncology, medical imaging, and—as mentioned above—antiviral drugs. More important than the absolute number of women, however, is whether the percentage of women enrolled reflects the overall disease prevalence in the female population. This was, for instance, the case for studies in cardiovascular indications, as a 38% proportion of women22 is relatively close to an estimated prevalence of the most frequent cardiovascular diseases among females in the general population—which is slightly above 40%.24
The proportion of women included in different phases of drug development, however, varied remarkably. A common finding of the U.S. drug application reviews was the relatively lower proportion of women, around 22 to 25%, in Phase I and early, small-scale Phase II studies.21,22 The majority of applications contained gender-related analyses of safety and efficacy and, consequently, gender-related statements, which additionally indicates that sponsors of the clinical studies were aware of the need to look for gender effects.
Both the European Medicines Agency (EMEA) and the Japanese Ministry of Health, Labour and Welfare, together with the Japanese Pharmaceutical Manufacturers Association, have performed surveys comparable to those in the United States.25 The EMEA reviewed 240 pivotal clinical studies of 84 marketing applications filed between 2000 and 2003. In the Japanese survey, the clinical studies of 60 NMEs approved between 2001 and 2003 were reviewed. The European and Japanese surveys confirmed the U.S. results: representative and nearly equal participation of women compared with men in late Phase II and Phase III studies, and lower participation of women in early development phase studies.
Overall, women have been shown to be sufficiently represented in clinical studies aiming at drug approval. Drugs are developed globally today, and the U.S. guidelines (and the ICH guidelines) have demonstrated a worldwide impact. Sponsors of clinical drug studies are aware of the need to ensure adequate enrollment of women. Although the inclusion of women in early development phase studies can still be improved, what matters more is that the number of women enrolled is sufficient to detect gender-specific differences early in development. Early studies are important to measure how women and men absorb, metabolize, and eliminate a drug and to obtain dose-related safety and efficacy information. As long as enough women are enrolled in these studies to set a rational basis for the dosages in subsequent confirmatory studies—and this seems to be the case today—a relatively lower number of women as compared to men should not be a concern.
As a consequence of the adequate enrollment of women in clinical drug studies, the European Commission has recently declined to implement a specific regulation on women. Likewise, the International Conference on Harmonization saw no need for a separate ICH guideline on women as a special population in clinical trials.25
The situation appears to be different when drug studies published in biomedical journals are the source for analysis. The underlying studies are mainly nonindustry-sponsored, i.e., academically driven, studies that primarily focus on disease management and final clinical outcomes rather than on a single drug's efficacy and safety. Recent representative systematic reviews24,26-30 have shown that in published studies the situation is still not satisfactory: women remain underrepresented—which is also the case for publicly funded studies—particularly in mortality and cardiovascular studies.24,27-30 Compared with the disease prevalence in women, enrollment rates are remarkably low for women with heart failure.30 The continued exclusion or underenrollment of women in areas outside drug development regulations may be due to a variety of reasons, such as legal liability concerns, the higher complexity of the informed consent process, the additional effort of pregnancy testing and adequate birth control during the study, and generally greater difficulties in recruiting women, e.g., due to child care responsibilities and lack of financial independence.24,27
Resolving this problem is critical for disease management that respects potential gender-related differences. This will be demonstrated later in this article by a specific example (primary prevention of cardiovascular disease).
As early as the 1980s, gender-related differences in pharmacokinetics (PK) were identified for some drugs,9 including theophylline, several benzodiazepines, and lidocaine. In the 1990s, a wave of research on the impact of gender differences on drug therapy began.
One needs to realize that both general and specific gender differences exist. Apart from female-specific issues such as variations in hormone levels throughout the menstrual cycle or menopause, there are basic physiological differences between women and men that also influence reactions to drugs. The lower glomerular filtration of certain drugs in women, for instance, may be merely a body weight effect,31 and the gender differences in the PK of lipophilic drugs or alcohol are effects of different body composition, i.e., of the higher percentage of body fat in women than in men and thus a different volume of distribution.32 There are various examples of drugs whose differences in PK relate to general gender differences, such as body weight, organ size, or body composition, e.g., diazepam, vancomycin, ofloxacin, and cefotaxime.31-34 In some cases, such as diazepam or ethanol, this can become important; in general, however, except for extreme cases of obesity, differences in body weight and composition have only minor or moderate effects on the pharmacokinetic profile of drugs.
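As an illustration of the body composition argument, the short Python sketch below estimates an apparent volume of distribution for a lipophilic drug from lean and fat mass and shows how the same dose then yields a somewhat different initial concentration. All numbers (the per-kilogram volume coefficients, body weights, fat fractions, and dose) are purely illustrative assumptions and do not describe any real drug or the studies cited above.

```python
def volume_of_distribution(weight_kg, fat_fraction, v_lean=0.4, v_fat=1.2):
    """Simplified apparent volume of distribution (L) for a lipophilic drug:
    lean and fat mass are assumed to contribute different volumes per kg."""
    lean_mass = weight_kg * (1 - fat_fraction)
    fat_mass = weight_kg * fat_fraction
    return lean_mass * v_lean + fat_mass * v_fat

dose_mg = 10.0  # assumed single intravenous bolus dose
# Illustrative "average" subjects differing in body weight and body fat percentage.
for label, weight, fat in [("man", 80.0, 0.20), ("woman", 65.0, 0.30)]:
    vd = volume_of_distribution(weight, fat)
    c0 = dose_mg / vd  # initial plasma concentration, C0 = dose / Vd
    print(f"{label}: Vd = {vd:.1f} L, C0 = {c0:.3f} mg/L")
```

Under these assumed numbers the resulting difference in initial concentration is modest, which is in line with the point above that body weight and composition usually have only minor or moderate effects.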
The differences in PK with respect to drug transporters, metabolizing enzymes, protein binding, or gastrointestinal physiology (absorption and bioavailability), however, have been shown to be generally only subtle, and their overall clinical relevance remains questionable.31,34 There are again limited examples in which a different metabolism, e.g., increased CYP3A4 activity or reduced P-glycoprotein activity in females, becomes clinically significant: the blood pressure-lowering drug verapamil and the antibiotic erythromycin, for instance, appear to be more effective in women than in men.33,34 And gender-related differences in liver enzyme activities and P-glycoprotein MDR1 expression may be an important reason for different responses to antidepressants, e.g., selective serotonin reuptake inhibitors (SSRIs).33,35
With respect to gender differences in renal elimination, a well-known example is digoxin: the body weight-adjusted glomerular filtration rate and the renal clearance are around 10% lower in women; the lower clearance and consequently higher digoxin concentrations might be the reason for a slightly increased death rate in women with heart failure.36,37 Nevertheless, impressive examples such as tirilazad, which directly demonstrate the high clinical significance of a PK gender difference, are rare: approval of tirilazad, a drug designed to improve outcomes in patients with ruptured aneurysms, was at first denied by the FDA because efficacy was shown only in men,38 probably due to a much higher clearance of the drug in women.
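The arithmetic behind the digoxin observation is straightforward: at steady state, the plasma concentration equals the dosing rate divided by clearance, so a roughly 10% lower clearance raises the steady-state concentration by about 11% at an unchanged dose. The snippet below works through this relationship with assumed, illustrative numbers; the dose rate and the absolute clearance value are not taken from the cited studies.

```python
# Steady-state concentration: Css = dosing rate / clearance.
dose_rate_mg_per_h = 0.125 / 24   # assumed 0.125 mg once daily
cl_reference = 2.0                # assumed reference clearance, L/h
cl_women = cl_reference * 0.90    # ~10% lower clearance, as reported for women

css_reference = dose_rate_mg_per_h / cl_reference
css_women = dose_rate_mg_per_h / cl_women

increase_pct = (css_women / css_reference - 1) * 100
print(f"Css increases by about {increase_pct:.0f}% at the same dose")  # ~11%
```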
Another question concerns the general clinical relevance of gender-related differences in pharmacodynamics. Women have a 1.5- to 1.7-fold greater risk of adverse drug reactions than men.39 And eight of ten prescription drugs withdrawn from the U.S. market between 1997 and 2000 posed greater health risks for women than for men.21 It is, however, difficult to separate a dosing effect (as women generally receive a higher dose relative to body weight) or a true pharmacokinetic effect from a pharmacodynamic effect. There are some examples of gender-related differences in pharmacodynamics in the literature: it is known, for instance, that premenopausal women respond better to SSRIs than men, whereas men respond better to tricyclic antidepressants.33,34 Furthermore, women have a lower response to some analgesics40 and a higher risk of developing drug-induced cardiac arrhythmia.41 Life-threatening ventricular torsades de pointes arrhythmia may occur after intake of a variety of drugs, such as antihistamines, antibiotics, or antipsychotics. Women show less benefit than men from oral anticoagulants and thrombolytic agents (with respect to mortality), and the latter cause more bleeding episodes in women.42
Randomized trials have shown in the past that low-dose aspirin is effective in the primary prevention of myocardial infarction in men, with little effect on the risk of ischemic stroke. Data in women were scarce. In a recently published primary prevention study designed to address this question in nearly 40,000 women, aspirin significantly lowered the risk of ischemic stroke without affecting the risk of myocardial infarction.43 Surprisingly, aspirin may affect women and men differently; the reasons remain unclear. This example demonstrates the necessity of addressing gender effects in large outcome studies as well.
Despite these examples, an important question remains: how often do such clinically significant differences in pharmacokinetics and/or pharmacodynamics between men and women in fact occur? The review of new molecular entities approved from 1995 to 1999 by the Center for Drug Evaluation and Research22 showed that gender-related differences were detectable in only 41 of 185 drugs (22%). Where differences were found, 90% were due to pharmacokinetics and in general did not lead to different treatment recommendations. Only five drugs showed differences concerning safety and two concerning efficacy. No product required a change in dosage based on gender differences. The analyses for biological products, as performed by both the Center for Biologics Evaluation and Research and the study sponsors, also generally showed no differences in safety or efficacy outcomes with respect to gender.23 There were a few products for which gender differences were noted; however, the significance of these findings was questionable.23
The general clinical significance of gender differences seems to be overestimated today. Differences in pharmacokinetics between women and men exist; however, they generally have no impact on drug dosing. Most drugs on the market exhibit a sufficiently wide therapeutic index, and minor PK differences usually do not reach clinical significance. Clinical significance is most likely to be reached when extensive gender-specific pharmacological differences exist and for drugs with a narrow therapeutic index, a steep dose-concentration curve, or both (for example, digoxin36,37 and tirilazad38). Gender-dependent differences in pharmacodynamics may become clinically important, but overall they are not frequent. Differences between individual patients, e.g., in body weight, body fat distribution, training status, or organ function, are at least as important as gender-specific issues.
Nevertheless, it is important that women are sufficiently enrolled in clinical studies so that clinically significant gender differences can be detected and labelled. At least in the United States, the majority of product labelling today contains references to gender assessment.22 In most cases there are no gender-related differences in the labelling or just minor ones, i.e., clinically insignificant PK differences (with a less than 15% difference in Cmax or AUC).
Today, women are sufficiently represented in clinical drug studies aimed at drug approval. The inclusion of women in early drug development studies can still be improved. In academically driven studies, women remain underrepresented. In individual cases where clinically significant gender-related differences exist, appropriate enrollment rates of women in clinical drug studies and subsequent adequate analyses for potential gender effects are important. As shown, women are at higher risk of drug-induced life-threatening ventricular arrhythmia and may benefit differently from drugs, e.g., from aspirin in the primary prevention of cardiovascular disease. In particular, large outcome studies need to look for potential gender differences. In general, however, gender differences should not be overestimated. For the vast majority of drugs, no significant gender effects exist. Differences between individual patients seem in general to be more important than gender-specific differences.
1. G.A. Christie, "Thalidomide and Congenital Abnormalities," Lancet, 2, 249 (1962).
2. S. Melnick, P. Cole, D. Anderson, A. Herbst, "Rates and Risks of Diethylstilboestrol Related to Clear Cell Adenocarcinoma of the Vagina and Cervix," New England Journal of Medicine, 316, 514–516 (1987).
3. A.L. Herbst, H. Ulfelder, D.C. Poskanzer, "Adenocarcinoma of the Vagina: An Association of Maternal Stilboestrol Therapy with Tumour Appearance in Young Women," New England Journal of Medicine, 284, 878–881 (1971).
4. D.T. Wright and N.J. Chew, "Women as Subjects in Clinical Research," Applied Clinical Trials, 44–54 (September 1996).
5. U.S. Department of Health, Education, and Welfare, "General Considerations for the Clinical Evaluation of Drugs, HEW (FDA) 77-3040" (Government Printing Office, Washington, September 1977).
6. Food and Drug Administration, "Executive Summary—Gender Studies in Product Development," www.fda.gov/womens/gender/Exec4.htm (accessed 20 February 2005).
7. R.B. Merkatz, R. Temple, S. Sobel et al., "Women in Clinical Trials of New Drugs. A Change in Food and Drug Administration Policy," New England Journal of Medicine, 329, 292–296 (1993).
8. T. McGovern, M. Davis, M.B. Caschetta, "Inclusion of Women in AIDS Clinical Research: A Political and Legal Analysis," Journal of the American Medical Women's Association, 49, 102–104, 109 (1994).
9. J.A. Hamilton and B. Parry, "Sex-related Differences in Clinical Drug Response: Implications for Women's Health," Journal of the American Medical Women's Association, 38, 126–132 (1983).
10. Center for Drug Evaluation and Research, "Guideline for the Format and Content of the Clinical and Statistical Sections of New Drug Applications" (Government Printing Office, Washington, July 1988).
11. U.S. General Accounting Office, "Women's Health: FDA Needs to Ensure More Study of Gender Differences in Prescriptive Drug Testing" (Government Printing Office, Washington, 29 October 1992).
12. National Institutes of Health Revitalization Act, Public Law 103-43 (1993).
13. U.S. Department of Health and Human Services, "Guideline for the Study and Evaluation of Gender Differences in the Clinical Evaluation of Drugs," Federal Register, 58, 39406 (22 July 1993).
14. U.S. Department of Health and Human Services, Food and Drug Administration, "Final Rule on Investigational New Drug Applications and New Drug Applications," Federal Register, 63, 6854–62 (11 February 1998).
15. U.S. Department of Health and Human Services, Food and Drug Administration, Final Rule "Investigational New Drug Applications: Amendment to Clinical Hold Regulations for Products Intended for Life-Threatening Diseases and Conditions," Federal Register, 65, 34963–71 (1 June 2000).
16. International Conference on Harmonization, Guideline E8 "General Considerations for Clinical Trials," www.ICH.org/MediaServer.jser?@ID=484&@_MODE=GLB (accessed 26 February 2005).
17. International Conference on Harmonization, "E4 Guideline on Dose Response Information to Support Drug Registration," www.ICH.org/MediaServer.jser?@_ID=480&@_MODE=GLB (accessed 26 February 2005).
18. International Conference on Harmonization, Guideline E3 "Structure and Content of Clinical Study Reports," www.ICH.org/MediaServer.jser?@_ID=479&@_MODE=GLB (accessed 26 February 2005).
19. International Conference on Harmonization, "The Common Technical Document for the Registration of Pharmaceuticals for Human Use. Efficacy -M4E," www.ICH.org/MediaServer.jser?@_ID=561&@_MODE=GLB (accessed 26 February 2005).
20. International Conference on Harmonization, Guideline M3(M) "Maintenance of the ICH Guideline on Non-Clinical Safety Studies for the Conduct of Human Clinical Trials for Pharmaceuticals," www.ICH.org/MediaServer.jser?@_ID=506&@_MODE=GLB (accessed 26 February 2005).
21. U.S. General Accounting Office, "Women's Health. Women Sufficiently Represented in New Drug Testing, but FDA Oversight Needs Improvement," (July 2001), www.gao.gov/new.items/d01754.pdf (accessed 20 February 2005).
22. Center for Drug Evaluation and Research, "Women's Participation in Clinical Trials and Gender-Related Labeling: A Review of New Molecular Entities Approved 1995–1999," (June 2001), www.fda.gov/cder/reports/womens_health/women_clin_trials.htm (accessed 13 February 2005).
23. Center for Biologics Evaluation and Research, "Participation of Females in Clinical Trials and Gender Analysis of Data in Biologic Product Applications," (April 2001), www.fda.gov/cber/clinical/femclin.htm (accessed 25 February 2005).
24. D.J. Harris and P.S. Douglas, "Enrollment of Women in Cardiovascular Clinical Trials funded by the National Heart, Lung, and Blood Institute," New England Journal of Medicine, 343, 475–480 (2000).
25. International Conference on Harmonization, "Gender Considerations in the Conduct of Clinical Trials," Regional Experience (November 2004), www.ICH.org/UrlGrpServer.jser?@_ID=276&@_TEMPLATE=254 (accessed 26 February 2005).
26. C.L. Meinert, A.K. Gilpin, A. Ünalp, C. Dawson, "Gender Representation in Trials," Controlled Clinical Trials, 21, 462–475 (2000).
27. K. Ramasubbu, H. Gurm, D. Litaker, "Gender Bias in Clinical Trials: Do Double Standards Still Apply?," Journal of Women's Health & Gender-Based Medicine, 10, 757–764 (2001).
28. P.Y. Lee, K.P. Alexander, B.G. Hammill, S.K. Pasquali, E.D. Peterson, "Representation of Elderly Persons and Women in Published Randomized Trials of Acute Coronary Syndromes," Journal of the American Medical Association, 286, 708–713 (2001).
29. S. Bandyopadhyay, A.J. Bayer, M.S. O'Mahony, "Age and Gender Bias in Statin Trials," QJM, 94, 127–132 (2001).
30. A. Heiat, C.P. Gross, H.M. Krumholz, "Representation of the Elderly, Women, and Minorities in Heart Failure Clinical Trials," Archives of Internal Medicine, 162, 1682–1688 (2002).
31. B. Meibohm, I. Beierle, H. Derendorf, "How Important Are Gender Differences in Pharmacokinetics?" Clinical Pharmacokinetics, 41, 329–342 (2002).
32. R.Z. Harris, L.Z. Benet, J.B. Schwartz, "Gender Effects in Pharmacokinetics and Pharmacodynamics," Drugs, 50, 222–239 (1995).
33. Society For Women's Health Research, "Sex Differences in Response to Pharmaceuticals, Tobacco, Alcohol and Illicit Drugs," www.womens-health.org/hs/facts_dat.htm (accessed 04 March 2005).
34. G.D. Anderson, "Sex and Racial Differences in Pharmacological Response: Where Is the Evidence? Pharmacogenetics, Pharmacokinetics, and Pharmacodynamics," Journal of Women's Health, 14, 19–29 (2005).
35. R.R. Bies, K.L. Bigos, B.G. Pollock, "Gender Differences in the Pharmacokinetics and Pharmacodynamics of Antidepressants," Journal of Gender Specific Medicine, 6, 12–20 (2003).
36. S.S. Rathore, Y. Wang, H.M. Krumholz, "Sex-Based Differences in the Effect of Digoxin for the Treatment of Heart Failure," New England Journal of Medicine, 347, 1403–1411 (2002).
37. S.S. Rathore, J.P. Curtis, Y. Wang, M.R. Bristow, H.M. Krumholz, "Association of Serum Digoxin Concentration and Outcomes in Patients with Heart Failure," Journal of the American Medical Association, 289, 871–878 (2003).
38. N.F. Kassell, E.C. Haley Jr., C. Apperson-Hansen, W.M. Alves, "Randomized, Double-Blind, Vehicle-Controlled Trial of Tirilazad Mesylate in Patients with Aneurysmal Subarachnoid Hemorrhage: A Cooperative Study in Europe, Australia, and New Zealand," Journal of Neurosurgery, 84, 221–228 (1996).
39. M. Rademaker, "Do Women Have More Adverse Drug Reactions?" American Journal of Clinical Dermatology, 2, 349–351 (2001).
40. L.S. Sun, "Gender Differences in Pain Sensitivity and Responses to Analgesia," Journal of Gender-Specific Medicine, 1, 28–30 (1998).
41. S.N. Ebert, X.-K. Liu, R.L. Woosley, "Female Gender as a Risk Factor for Drug-Induced Cardiac Arrhythmias: Evaluation of Clinical and Experimental Evidence," Journal of Women's Health, 7, 547–557 (1998).
42. W.M. Davis, "Impact of Gender on Drug Response," Drug Topics, October 5 (1998), www.pharm.chula.ac.th/Surachai/academic/ContEd/Impact%20of%20gender.pdf (accessed 05 March 2005).
43. P.M. Ridker, N.R. Cook, I.-M. Lee et al., "A Randomized Trial of Low-Dose Aspirin in the Primary Prevention of Cardiovascular Disease in Women," New England Journal of Medicine, published online at www.nejm.org on 07 March 2005.
Peter Kleist, MD, is a specialist in pharmaceutical medicine (CH) responsible for special projects with the Swiss Agency for Therapeutic Products, Hallerstrasse 7, CH-3009 Bern, Switzerland, e-mail: peter.kleist@bluewin.ch.
The views of the author do not necessarily reflect the position of the Swiss Agency.