Here we present the G–test of independence and the chi-square test of independence, using PROC FREQ in SAS:

```
DATA cad;
   INPUT genotype $ health $ count;
   DATALINES;
ins-ins  no_disease 268
ins-ins  disease    807
ins-del  no_disease 199
ins-del  disease    759
del-del  no_disease  42
del-del  disease    184
;
PROC FREQ DATA=cad;
   WEIGHT count / ZEROS;
   TABLES genotype*health / CHISQ;
RUN;
```

The output includes the following:
```
Statistics for Table of genotype by health

Statistic                     DF      Value      Prob
Chi-Square                     2     7.2594    0.0265
Likelihood Ratio Chi-Square    2     7.3008    0.0260
Mantel-Haenszel Chi-Square     1     7.0231    0.0080
Phi Coefficient                      0.0567
Contingency Coefficient              0.0566
Cramer's V                           0.0567
```
• The “Likelihood Ratio Chi-Square” is what SAS calls the G–test of independence; in this case, G=7.3008, 2 d.f., P=0.0260.
• The “Chi-Square” on the first line is the Pearson chi-square test of independence; in this case, chi-square=7.2594, 2 d.f., P=0.0265.
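Both statistics can be reproduced by hand from the counts above. Here is a minimal sketch in Python (standard library only, not part of the SAS output): expected counts come from the usual (row total × column total) / grand total formula, and because there are exactly 2 d.f., the chi-square tail probability simplifies to exp(−x/2).

```python
import math

# genotype x health counts, in the same order as the DATALINES above
observed = [[268, 807],   # ins-ins: no_disease, disease
            [199, 759],   # ins-del
            [42, 184]]    # del-del

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

chi_square = 0.0
g = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / n
        chi_square += (obs - expected) ** 2 / expected   # Pearson chi-square
        g += 2 * obs * math.log(obs / expected)          # likelihood ratio (G)

df = (len(observed) - 1) * (len(observed[0]) - 1)        # (3-1)*(2-1) = 2
p_chi = math.exp(-chi_square / 2)   # exact tail only because df == 2
p_g = math.exp(-g / 2)

# should agree with the SAS output above (7.2594 / 0.0265 and 7.3008 / 0.0260)
print(f"chi-square = {chi_square:.4f}, P = {p_chi:.4f}")
print(f"G          = {g:.4f}, P = {p_g:.4f}")
```

The exp(−x/2) shortcut works only for 2 d.f.; for other degrees of freedom you would need the chi-square survival function (e.g. `scipy.stats.chi2.sf`).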

Chi-square vs. G–test
The chi-square test gives approximately the same results as the G–test. Unlike the chi-square test, G-values are additive, which means they can be used for more elaborate statistical designs. G–tests are a subclass of likelihood ratio tests, a general category of tests that have many uses for testing the fit of data to mathematical models; the more elaborate versions of likelihood ratio tests don’t have equivalent tests using the Pearson chi-square statistic. The G–test is therefore preferred by many, even for simpler designs. On the other hand, the chi-square test is more familiar to more people, and it’s always a good idea to use statistics that your readers are familiar with when possible. You may want to look at the literature in your field and see which is more commonly used. 
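The additivity claim can be made concrete with a small sketch, using made-up counts (not from the example above): for replicated goodness-of-fit tests, the individual G values sum exactly to the pooled G plus the heterogeneity G (the independence G of the replicates against each other), an identity that does not hold for the Pearson chi-square statistic.

```python
import math

def g_goodness_of_fit(observed, proportions):
    """G statistic for observed counts against extrinsic expected proportions."""
    n = sum(observed)
    return 2 * sum(o * math.log(o / (n * p)) for o, p in zip(observed, proportions))

def g_independence(table):
    """G statistic for independence in an r x c contingency table."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    return 2 * sum(o * math.log(o / (rows[i] * cols[j] / n))
                   for i, r in enumerate(table) for j, o in enumerate(r))

# Two hypothetical replicates, each tested against an expected 1:1 ratio
rep1, rep2 = [10, 20], [30, 15]
half = [0.5, 0.5]

g_total = g_goodness_of_fit(rep1, half) + g_goodness_of_fit(rep2, half)
g_pooled = g_goodness_of_fit([a + b for a, b in zip(rep1, rep2)], half)
g_het = g_independence([rep1, rep2])   # replicates tested against each other

# The individual G values decompose exactly into pooled + heterogeneity:
print(g_total, g_pooled + g_het)
```

This exact decomposition is what lets G statistics be partitioned in more elaborate designs; the corresponding Pearson chi-square values only sum approximately.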