Besides the Z-test to detect significant differences, some other interesting statistics are available for crosstabs.
One of them is Pearson’s Chi-squared test, a so-called ‘goodness of fit’ test. Simply put, the test tells you whether the observed results in your crosstab differ substantially from the results you would expect if your variables were independent (in our example: if every age group had an identical taste preference). If the calculated Chi-square value is higher than the ‘critical’ Chi-square value, you can reject the ‘null hypothesis’ of independent variables at a specific significance level (the probability of rejecting the null hypothesis while it is actually true; 0.05 is commonly used, corresponding to a ‘p-value’ below 0.05). In plain language, this is a strong indication that actual differences exist between your subgroups.
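As an illustration (independent of any particular tool), the test can be run in a few lines of Python using SciPy. The crosstab counts below are made up for the example: two age groups and their preference for two flavours.

```python
from scipy.stats import chi2_contingency

# Hypothetical crosstab: rows = age groups, columns = counts per taste preference
observed = [
    [30, 10],  # under 35: prefers flavour A vs flavour B
    [15, 25],  # 35 and over
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")

if p < 0.05:
    print("Reject the null hypothesis of independence: "
          "age group and taste preference appear related.")
```

Note that `chi2_contingency` also returns the `expected` counts under independence, which is handy for seeing where the observed table deviates most.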
We also offer Fisher’s exact test, which is similar to the Chi-square test but can only be applied in specific situations, i.e. when you have a 2×2 table of categorical variables with small cell sizes (expected values less than 5). For larger cell sizes the ‘standard’ Chi-square test is recommended.
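For small 2×2 tables, SciPy provides this test as well. The counts below are again hypothetical, chosen to be small enough that the Chi-square approximation would be unreliable.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 crosstab with small cell counts
table = [
    [8, 2],
    [1, 5],
]

oddsratio, p = fisher_exact(table)
print(f"odds ratio = {oddsratio:.2f}, p = {p:.4f}")
```

Unlike the Chi-square test, Fisher’s exact test computes the p-value exactly from the hypergeometric distribution, so no minimum expected cell count is required.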
Kendall’s tau-b can also be calculated: a non-parametric measure of association between columns of ranked data (e.g. the ranking of students on 2 different exams). The result is a rank correlation coefficient between -1 and +1, where 0 means no relationship, +1 a perfect positive relationship (all pairs ‘concordant’: the students are ranked exactly the same on the 2 exams) and -1 a perfect negative relationship (all pairs ‘discordant’: the rankings for the 2 exams are completely inverted).
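A quick sketch of the two extreme cases, using SciPy’s `kendalltau` (which computes the tau-b variant by default) on invented exam rankings for five students:

```python
from scipy.stats import kendalltau

# Hypothetical rankings of 5 students on two exams
exam1 = [1, 2, 3, 4, 5]
exam2 = [1, 2, 3, 4, 5]   # identical ranking on both exams

tau, _ = kendalltau(exam1, exam2)
print(f"identical rankings: tau = {tau:.1f}")   # perfect positive relationship

tau_rev, _ = kendalltau(exam1, exam2[::-1])     # completely inverted ranking
print(f"inverted rankings:  tau = {tau_rev:.1f}")  # perfect negative relationship
```

Real data will of course land somewhere between these extremes; tau-b also corrects for tied ranks, which the toy example above does not contain.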
More detailed information on these tests can be found in any statistics handbook.