For the assessment of the effectiveness and safety of an investigational pharmaceutical entity, the scale for the primary clinical endpoint should ideally be numerically continuous to provide an accurate and reliable assessment. In practice, however, it is often impossible, or extremely expensive, to measure responses quantitatively. On the other hand, patients' responses to treatments can be easily documented according to the occurrence of some meaningful and well-defined event, such as death, infection, cure of a certain disease, or a serious adverse event. In addition, the intensity of these events can be graded according to some predefined categories. Therefore, categorical data can serve as useful surrogate endpoints for unobserved latent continuous variables in clinical trials. Sometimes, to simplify the analysis and/or improve the presentation of results, continuous data are transformed to categorical data according to predefined criteria. As a result, many efficacy and safety endpoints in clinical trials take the form of categorical data on either a nominal or ordinal scale.^{2} Different statistical methods are used to analyze these data in clinical trials.
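As a simple illustration of transforming a continuous measurement into ordinal categories against predefined criteria, consider the following sketch in Python. The variable and cutpoints are hypothetical, chosen for illustration only, not taken from any clinical guideline.

```python
def categorize(value, cutpoints, labels):
    """Map a continuous measurement to an ordinal category using
    predefined ascending cutpoints; len(labels) == len(cutpoints) + 1."""
    for cut, label in zip(cutpoints, labels):
        if value < cut:
            return label
    return labels[-1]  # at or above the highest cutpoint

# Hypothetical grading of diastolic blood pressure (mmHg) into
# three ordered categories
labels = ["normal", "elevated", "hypertensive"]
print(categorize(72, [80, 90], labels))  # normal
print(categorize(95, [80, 90], labels))  # hypertensive
```

Once graded this way, the ordinal categories can be analyzed with the methods described in the sections that follow.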
The different statistical methods used to analyze categorical data are described in detail below, and the limitations of each method are discussed.
Univariate and bivariate statistical methods
Chi-square test. The chi-square test of independence is used to test the association between two categorical variables. The idea behind this test is to compare the observed frequencies with the frequencies that would be expected if the null hypothesis of no association (statistical independence) were true. By assuming the variables are independent, we can predict an expected frequency for each cell in the contingency table. If the value of the chi-square test statistic is too large, it indicates poor agreement between the observed and expected frequencies, and the null hypothesis of independence is rejected. In clinical trials, for example, it can be used to test the association between an adverse event and the treatment received. The chi-square test has some assumptions: independent random sampling, no more than 20% of the cells with an expected frequency less than five, and no empty cells. If the chi-square test shows a significant result, we may be interested in the degree or strength of the association between the variables, but the test itself does not provide this. Furthermore, when 20% or more of the cells have an expected frequency less than five, the usual chi-square test is not valid; in that case, the Fisher exact test is used to test the association. The Fisher exact test, too, does not quantify the strength of the association.
Cochran-Armitage trend test. The investigator may be interested in the trend in response rate across different doses of a drug. The Cochran-Armitage trend test is used for this purpose. It tests for trends in binomial proportions across the levels of a single factor or covariate.
This test is appropriate for a two-way table in which one variable has two levels and the other is ordinal. The two-level variable represents the response, and the other represents an explanatory variable with ordered levels.
McNemar test. In clinical trials, we often test the improvement in response rate after a particular treatment. The statistical test used for this purpose is the McNemar test. It is a test on a 2x2 classification table, used to test the difference between paired proportions (e.g., studies in which patients serve as their own control, or studies with a before-and-after design). Its assumptions are: sample members must be randomly drawn from the population, within-group sample scores must be independent of each other, and no expected frequency should be less than five. The Cochran Q test is an extension of the McNemar test for related samples that provides a method for testing the differences between three or more matched sets of frequencies or proportions.
Kappa statistics. Kappa statistics are used for measuring agreement or association among raters. In clinical measurement, a new measurement technique must often be compared with an established one to check whether they agree sufficiently for the new to replace the old. Correlation is often misleading for this purpose.^{5} The kappa coefficient (k) is used to assess inter-rater agreement. One of its most important features is that it is a measure of agreement that naturally controls for chance. If there is complete agreement, k = 1. If the observed agreement is greater than or equal to the chance agreement, k ≥ 0; if the observed agreement is less than or equal to the chance agreement, k ≤ 0. The kappa coefficient is conventionally classified into ranges of agreement strength, from poor to very good.
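The chi-square, McNemar, and kappa computations above can each be carried out directly from a 2x2 table. The following is a minimal sketch in plain Python with hypothetical counts; in practice these tests are available in any standard statistical package.

```python
from math import erfc, sqrt

def chi_square_2x2(table):
    """Pearson chi-square test of independence for a 2x2 table
    [[a, b], [c, d]]. Returns (statistic, p_value) with 1 df."""
    (a, b), (c, d) = table
    n = a + b + c + d
    rows, cols = [a + b, c + d], [a + c, b + d]
    stat = sum((table[i][j] - rows[i] * cols[j] / n) ** 2
               / (rows[i] * cols[j] / n)
               for i in range(2) for j in range(2))
    # For 1 df, the chi-square tail probability is erfc(sqrt(x / 2))
    return stat, erfc(sqrt(stat / 2))

def mcnemar_statistic(b, c):
    """McNemar chi-square statistic (1 df) from the two discordant
    cell counts of a paired 2x2 table."""
    return (b - c) ** 2 / (b + c)

def cohen_kappa(table):
    """Cohen's kappa for a 2x2 inter-rater agreement table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    p_obs = (a + d) / n                                       # observed agreement
    p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2  # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical counts: adverse events by treatment arm (20/100 vs 10/100)
stat, p = chi_square_2x2([[20, 80], [10, 90]])
print(round(stat, 3), round(p, 4))  # 3.922 0.0477

# Hypothetical paired data: 15 improved only after treatment, 5 only before
print(mcnemar_statistic(15, 5))     # 5.0

# Hypothetical agreement table for two raters
print(round(cohen_kappa([[40, 10], [5, 45]]), 3))  # 0.7
```

Note how the McNemar statistic uses only the discordant cells, and how kappa discounts the agreement expected by chance alone.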
Wilcoxon signed-rank test. The Wilcoxon signed-rank test is a nonparametric statistical hypothesis test for the case of two related samples or repeated measurements on a single sample. It can be used as an alternative to the paired Student's t-test when the population cannot be assumed to be normally distributed.
Mann-Whitney U test. The Mann-Whitney U test (also called the Mann-Whitney-Wilcoxon (MWW), Wilcoxon rank-sum, or Wilcoxon-Mann-Whitney test) is a nonparametric test for assessing whether two independent samples of observations come from the same distribution. It is one of the best-known nonparametric significance tests. It was proposed initially by Wilcoxon (1945) for equal sample sizes, and extended to arbitrary sample sizes and in other ways by Mann and Whitney (1947). It can be used as an alternative to the independent-samples Student's t-test when the population cannot be assumed to be normally distributed.
Kruskal-Wallis test. The Kruskal-Wallis test is a nonparametric method for testing the equality of population medians among groups. Intuitively, it is analogous to a one-way analysis of variance with the data replaced by their ranks. It is an extension of the Mann-Whitney U test to three or more groups.
Friedman's test. A nonparametric (distribution-free) test used to compare observations repeated on the same subjects. It is an alternative to the repeated-measures ANOVA when the assumption of normality or equality of variance is not met. Like many nonparametric tests, it uses the ranks of the data rather than their raw values to calculate the statistic. If there are only two measures, this test is equivalent to the sign test.
Odds ratio (OR). The odds ratio is the ratio of the odds of an event occurring in one group to the odds of it occurring in another group. It is used to assess the risk of a particular outcome (or disease) if a certain factor (or exposure) is present.
The odds ratio is a relative measure of risk, telling us how much more likely it is that someone who is exposed to the factor under study will develop the outcome compared with someone who is not exposed. For a 2x2 table with cell counts a and b (exposed, with and without the outcome) and c and d (unexposed, with and without the outcome), OR = ad/bc.
Relative risk (RR). Relative risk is the ratio of the probability of the event occurring in the exposed group versus the non-exposed group. In clinical trials, it is used to compare the risk of developing a disease in people not receiving the treatment (or receiving a placebo) with that in people receiving the treatment; alternatively, it is used to compare the risk of developing a side effect in people receiving a drug with that in people not receiving it. For the same 2x2 layout, RR = [a/(a+b)] / [c/(c+d)].
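The two ratios can be sketched in a few lines of Python. The 2x2 layout follows the convention above, and the counts are hypothetical.

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table: (a/b) / (c/d) = (a*d) / (b*c)."""
    return (a * d) / (b * c)

def relative_risk(a, b, c, d):
    """Relative risk: risk in the exposed group, a/(a+b),
    divided by risk in the unexposed group, c/(c+d)."""
    return (a / (a + b)) / (c / (c + d))

# Hypothetical trial: 20/100 events on the drug, 10/100 on placebo
print(odds_ratio(20, 80, 10, 90))     # 2.25
print(relative_risk(20, 80, 10, 90))  # 2.0
```

For rare outcomes the two measures are close; here the event is common enough that the OR (2.25) visibly overstates the RR (2.0).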
Sensitivity, specificity, PPV, and NPV
Sensitivity. The probability of the test finding disease among those who have the disease; that is, the proportion of people with the disease who test positive.
Specificity. The probability of the test finding no disease among those who do not have the disease; that is, the proportion of people free of the disease who test negative.
Positive predictive value (PPV). The percentage of people with a positive test result who actually have the disease.
Negative predictive value (NPV). The percentage of people with a negative test result who do not have the disease.
Regression methods used in clinical trials
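The four diagnostic measures just defined follow directly from the counts of true and false positives and negatives. A minimal sketch with hypothetical screening results:

```python
def diagnostic_measures(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from test-vs-disease counts."""
    return {
        "sensitivity": tp / (tp + fn),  # positive tests among the diseased
        "specificity": tn / (tn + fp),  # negative tests among the disease-free
        "ppv": tp / (tp + fp),          # diseased among positive tests
        "npv": tn / (tn + fn),          # disease-free among negative tests
    }

# Hypothetical screening study: 90 true positives, 20 false positives,
# 10 false negatives, 80 true negatives
m = diagnostic_measures(tp=90, fp=20, fn=10, tn=80)
print(m["sensitivity"], m["specificity"])  # 0.9 0.8
```

Unlike sensitivity and specificity, PPV and NPV depend on the prevalence of disease in the population tested.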
Logistic regression. The independent (predictor) variables in logistic regression can take any form; that is, logistic regression makes no assumptions about the distribution of the independent variables. They do not have to be normally distributed, linearly related, or of equal variance within each group. The relationship between the predictor and response variables is not a linear function in logistic regression; instead, the logit transformation of the response probability p is used: logit(p) = ln[p / (1 - p)] = α + βx, where α is the constant (intercept) of the equation and β is the coefficient of the predictor variable.
Goodness of fit test^{6}
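The logit transformation and its inverse can be sketched as follows. The coefficient values are hypothetical, not fitted from any data.

```python
from math import exp, log

def logit(p):
    """Logit transformation: the log-odds of probability p."""
    return log(p / (1 - p))

def predicted_probability(alpha, beta, x):
    """Invert logit(p) = alpha + beta * x to recover the probability p."""
    return 1 / (1 + exp(-(alpha + beta * x)))

# Hypothetical coefficients alpha = -2.0, beta = 0.5; at x = 4 the
# linear predictor is 0, so the predicted probability is exactly 0.5
print(predicted_probability(-2.0, 0.5, 4.0))  # 0.5
print(logit(0.5))                             # 0.0
```

The inverse mapping is what guarantees that fitted probabilities always lie between 0 and 1, whatever values the predictors take.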
Conditional logistic regression. This method is used to investigate the relationship between an outcome and a set of prognostic factors in matched case-control studies, where the outcome is whether the subject is a case or a control. If there is only one case and one control, the matching is 1:1.
Multinomial logistic regression. An extension of binary logistic regression, it is used when the dependent variable has more than two nominal (unordered) categories. In multinomial logistic regression, the dependent variable is dummy-coded into multiple 1/0 variables. There is a variable for every category but one, so if there are M categories, there will be M-1 dummy variables. Each category's dummy variable has a value of 1 for its category and 0 for all others. One category, the reference category, does not need its own dummy variable, as it is uniquely identified by all the other variables being 0. Multinomial logistic regression then estimates a separate binary logistic regression model for each of those dummy variables, giving M-1 binary logistic regression models. Each one gives the effect of the predictors on the probability of success in that category, in comparison with the reference category. Each model has its own intercept and regression coefficients; the predictors can affect each category differently.
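The dummy-coding scheme described above can be sketched in a few lines. The category labels below are hypothetical.

```python
def dummy_code(value, categories, reference):
    """Dummy-code one observation: M categories become M-1 indicator
    variables, with the reference category encoded as all zeros."""
    others = [c for c in categories if c != reference]
    return [1 if value == c else 0 for c in others]

# Hypothetical three-level response with "none" as the reference category
cats = ["none", "partial", "complete"]
print(dummy_code("partial", cats, "none"))   # [1, 0]
print(dummy_code("complete", cats, "none"))  # [0, 1]
print(dummy_code("none", cats, "none"))      # [0, 0]
```

With M = 3 categories, two indicator columns suffice: the reference category is the row of all zeros, so adding a third column would be redundant.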
Model goodness of fit^{7}
Log-linear models. The log-linear model is a specialized case of the generalized linear model for Poisson-distributed data. Log-linear analysis is an extension of the two-way contingency table, in which the conditional relationship between two or more discrete categorical variables is analyzed by taking the natural logarithm of the cell frequencies within a contingency table.
Cochran-Mantel-Haenszel (CMH) test. This test is used to test conditional independence in 2x2xK tables. It is a non-model-based test used to identify confounders and to control for confounding in the statistical analysis. The CMH test can be generalized to IxJxK tables.
Linear mixed models (LMM). Linear mixed models handle data in which the observations are not independent; that is, LMM correctly models correlated errors, whereas procedures in the general linear model family (GLM) usually do not. LMM is a further generalization of GLM. Random factors are categorical variables for which only a random sample of the possible category values is measured. Random effects models are models with one or more random factors and optional covariates as predictors. Fixed factors are categorical variables for which all possible category values (levels) are measured. Fixed effects models are models with only fixed factors and optional covariates as predictors. Mixed models have both fixed and random factors, as well as optional covariates, as predictors. Hierarchical linear models (HLM) are a type of mixed model with hierarchical data, that is, data that exist at more than one level. Random coefficients (RC) models, also called multilevel regression models, are a type of mixed model with hierarchical data in which each group at the higher level is assumed to have different regression slopes, as well as different intercepts, for purposes of predicting an individual-level dependent variable.
Generalized estimating equations (GEE).
The method of generalized linear models (GLM) is an integral part of the data analyst's toolkit, as it encompasses many models under one roof: logistic and probit regression, ordinary least squares, ordinal outcome regression, regression models for the analysis of survival data, and so on. However, it is inadequate when the data are longitudinal or otherwise grouped, so that observations within the same group are expected to be correlated. The method of generalized estimating equations (GEE) is a generalization of GLM that takes this within-group correlation into account. The GEE method is a practical strategy for the analysis of repeated measurements, particularly categorical repeated measurements. It provides a way to handle continuous explanatory variables, a moderate number of categorical explanatory variables, and time-dependent explanatory variables. It also handles missing values; that is, the number of measurements in each cluster can vary from 1 to t. The GEE method has several important properties.^{8}
Correlation structure. In choosing the best correlation structure, the following general guidelines can be offered.^{8} If the size of the panels is small and the data are complete, use the unstructured correlation specification. If the observations within a panel are collected for the same primary sampling unit (PSU) over time, use a specification that also has time dependence. If the observations are clustered (not collected over time), use the exchangeable correlation structure. If the number of panels is small, the independence model may be best, but calculate the sandwich estimate of variance for use with hypothesis tests and interpretation of coefficients. If more than one correlation specification satisfies the above descriptions, use the QIC measure to discern the best choice. Of course, if there is motivating scientific evidence for a particular correlation structure, that specification should be used. The QIC measure, like any model selection criterion, should not be followed blindly.
Proportional odds model. In the proportional odds model, we consider each category in turn and compare the frequency of response at least up to that point on the ordinal scale with the frequency for all points higher on the scale. The first category is compared with all the rest combined, then the first and second combined are compared with all the rest combined, and so on. In this way, the original table with an I-category ordinal scale is converted into a series of I-1 subtables, each with a binary categorization: lower or higher than the point on the scale. We then have three types of variables: the new binary response variable, indicating more or less on the ordinal scale; a variable indexing the subtables, corresponding to the points on the ordinal scale; and the explanatory variables. An advantage of this construction is that the interpretation of conclusions is not modified when the number of ordinal response categories is changed. The model is given by logit(θi) = αi + βx.
Where θ1 = π1, the probability of the first ordered category; θ2 = π1 + π2, the probability of the first or second ordered category; and θi = π1 + π2 + . . . + πi, the probability of the first through ith ordered categories. Thus we allow the intercept to be different for the different cumulative logit functions, but the effect of the explanatory variables is the same across the different logit functions. That is, we allow a different intercept αi for each of the cumulative odds, but only one set of slope coefficients β for all of them. This is the proportionality assumption, and it is why this type of model is called the proportional odds model. Also notice that although this is a model in terms of cumulative odds, we can always recover the probabilities of each response category.
Discussion
Different statistical methods are used to analyze categorical data in different situations. Each method has its limitations, and another method is used to overcome them. In clinical trials, most of these methods play a very important role in the analysis. Before performing a statistical analysis, we need to check the assumptions and study the situation. When the data are categorical, these methods can help in obtaining appropriate results for decisions on the study objectives. A number of software packages are available to perform these analyses with ease.
Devadiga Raghavendra* is Associate Biostatistician, email: pd.raghavendra86@gmail.com
*To whom all correspondence should be addressed.
References
1. Duolao Wang and Ameet Bakhai, Clinical Trials—A Practical Guide to Design, Analysis, and Reporting (Remedica Publishing, USA, 2006).
2. Alan Agresti, Categorical Data Analysis, 2nd Ed. (John Wiley & Sons, New Jersey, 2002).
3. Douglas G. Altman, Practical Statistics for Medical Research (Chapman & Hall, London, 1991).
4. Michael J. Campbell, Statistics at Square Two, 2nd Ed. (Blackwell, USA, 2006).
5. J. M. Bland and D. G. Altman, "Statistical Methods for Assessing Agreement Between Two Methods of Clinical Measurement," The Lancet (i) 307-310 (1986).
6. David W. Hosmer and Stanley Lemeshow, Applied Logistic Regression, 2nd Ed. (John Wiley & Sons, NJ, 2000).
7. "Research Methods II: Multivariate Analysis," Journal of Tropical Pediatrics, 136-143 (2009), http://www.oxfordjournals.org/our_journals/tropej/online/ma_chap13.pdf.
8. Maura E. Stokes, Categorical Data Analysis Using the SAS System, 2nd Ed. (John Wiley & Sons, USA, 2003).

