Appendix D – Survey Method and Analysis

This appendix expands on the Review’s examination and analysis of the Unacceptable Behaviour Survey. It notes the methodology and limitations of the exercise, and presents a brief review of the SEQ (which forms the gender and sex-related harassment section of the surveys).

Methodology, Analysis and Limitations

The administration of the 2011 ADFA Unacceptable Behaviour Survey was organised and conducted within a period of under two weeks by Defence’s Directorate of Strategic Personnel Policy Research (DSPPR), at the request of the Review. The timing, voluntary nature and logistics of the 2011 administration meant that a smaller cohort completed the survey than in the Grey Review. The process in 2011 was as follows. In early June the Review requested that the DSPPR conduct the survey, and the DSPPR then received ethics approval for its administration. Between 10 June and 14 June 2011 (the morning of administration), cadets were informed that the survey would be administered, and those with prior academic or medical commitments or approved leave were excused from participating. All others were required to attend a briefing explaining the process, after which they were invited to leave if they did not wish to participate. In 2011, the participation rate was 61.6% of all cadets (N=599), compared with participation rates of 83% (N=825) in 1998 and 86% (N=837) in 2005.[474]

The Review’s analysis of the Unacceptable Behaviour Survey data is based on the data collected by the survey, an analysis of this data provided to the Review by the DSPPR and the Directorate of Workforce Intelligence (DWIntel) (DSPPR Report 5/2011), and an examination of academic surveying literature to support the Review’s analytical approach. In addition, the Review used supplementary reports and data provided by the DSPPR and DWIntel for items not included in the original reports and analysis, including gender disaggregations for some items, largely in order to make comparisons with the Grey Review’s findings.

The Review staff, along with senior ADFA staff, also received a briefing from the DSPPR and DWIntel on 13 July 2011. The DSPPR and DWIntel staff explained the surveying process and the analysis that they had conducted, and raised several methodological questions and issues about the process. An examination of these issues has aided the Review’s understanding of the survey instrument and its psychometric properties, and improved the quality of the analysis conducted.

DSPPR Report 5/2011 summarised these issues in a section on ‘limitations and caveats,’ suggesting that:

the ADFA Unacceptable Behaviour Survey, specifically the 2011 iteration, may not provide an accurate indication of the prevalence of unacceptable behaviour at ADFA, nor are any comparisons made with previous results valid enough to draw robust conclusions and/or generalisations because:

  • there is distinct inconsistency between the two prevalence indicators yielded by the surveys;
  • valid measures of unacceptable behaviour experiences are hampered by context dependency;
  • a self-selection bias may have contributed to both participation in the survey and the nature of responses; and
  • changes to the survey over time render any useful comparisons negligible.[475]

The Review acknowledges that these factors present complications for analysis; however, it disagrees with some of the conclusions that DSPPR Report 5/2011 reaches on account of them. Each will be addressed in turn.

DSPPR Report 5/2011 states that ‘there is distinct inconsistency between the two prevalence indicators yielded by the surveys.’ This refers to the fact that the Unacceptable Behaviour Survey contains a collection of discretely designed sections which employ different methods of data collection, which can yield differing response levels.[476] For example, section 2 asks for opinions of unacceptable behaviour, and employs the ‘direct query’ method, whereas sections 3 and 4 employ the ‘behavioural experiences’ method. The ‘direct query’ method asks respondents direct questions and allows respondents to self-define what constitutes unacceptable behaviour. This method tends to return lower incidence rates than the ‘behavioural experiences’ approach, which presents respondents with a series of behaviours and asks whether they are applicable to the respondent without asking for subjective categorisations.[477] The Review acknowledges that these approaches return inconsistent findings, and supports the suggestion made in the DSPPR’s analysis of the 2008 ADFA Unacceptable Behaviour Survey that ‘a complete survey evaluation and validation’ would improve the quality of the instrument and the information that it captured.[478]

The Review notes from the outset that it believes that the results returned by the behavioural experiences method are more robust than those returned by direct query, and should be used by ADFA in formulating its organisational response to this and future surveys, for three reasons:

  • following Ilies (2003), the Review believes that the behavioural experiences method ‘minimizes respondent perceptual bias’, allowing individuals to respond to queries about certain behaviours without first needing to make subjective judgements as to whether they have been subjected to an undefined category such as ‘unacceptable behaviour’ or ‘sexual harassment’;[479]
  • section 4 of this survey, which deals with gender and sex-related harassment (the SEQ items utilising the behavioural experiences method), is the one part of the broader instrument which has undergone ‘reliability and validity testing’, and the SEQ has been used in Australian and overseas military contexts since the mid 1990s;[480] and
  • the SEQ items, which the Review uses to conduct comparison over time, form a part of the survey which has essentially remained the same since the Grey Review.

For all of these reasons, the Review believes that section 4 of the 2011 ADFA Unacceptable Behaviour Survey, dealing with gender and sex-related harassment behaviours, offers a reasonable basis for examining the rates of unacceptable behaviour experiences in this area.

DSPPR Report 5/2011 notes that ‘valid measures of unacceptable behaviour experiences are hampered by context dependency.’ The Review accepts that context is important in interpreting and analysing the data collected, and in particular notes that there is a discrepancy between the level of those who report experiencing the listed ‘harassment’ and ‘discrimination’ behaviours, and those who consider these behaviours ‘unacceptable’ (e.g. 86.3% of respondents reported experiencing a general harassment or discrimination item but only 44.7% of these reported it as ‘unacceptable’). Rather than making the results of the survey any less worthy of analysis, the Review believes that the reasons for such discrepancies should be examined by ADFA and Defence staff when responding to survey results, and when designing and interpreting future surveys.

DSPPR Report 5/2011 notes that ‘a self-selection bias may have contributed to both participation in the survey and the nature of responses.’ The Review accepts this proposition, but suggests that this is the case with any such survey, and that little can be done about it.

DSPPR Report 5/2011 suggests that ‘changes to the survey over time render any useful comparisons negligible.’ The comparison that the Review conducts is careful and controlled, and limited to the sex and gender harassment items, which have essentially remained the same between 1998 and 2011.[481] The Review is examining the reported experiences of a defined cohort of cadets, through the use of a psychometrically assessed instrument (the SEQ), and is not seeking to extrapolate more broadly on the nature of harassment. O’Leary-Kelly et al., note the prominence of the SEQ as a tool for measuring the experience of sexual harassment across the literature, and Gutek et al., argue that ‘researchers who use the same version of the SEQ could establish their own base rate and examine changes over time.’[482] Further, the circumstances in which the 1998 and 2011 surveys were administered, at the time of the Grey and Broderick Reviews, also bear a similarity. On these bases, the Review believes that the comparisons it conducts are sound.

The Review also accepts the limitation, made elsewhere in DSPPR Report 5/2011, that:

group administration in an open area with respondents potentially seated near the individuals responsible for the behaviours they are describing may have negatively influenced perceptions of anonymity and, as such, hindered accurate self-reporting.[483]

In conclusion, the Review accepts the importance of the limitations and caveats raised by the DSPPR and DWIntel and, in addressing them, has constructed a robust framework for its analysis.

The SEQ

The gender and sex-related harassment items listed in the 2011, 2005 and 1998 surveys are based on a survey instrument called the Sexual Experiences Questionnaire. The SEQ was first conceived by academic Louise Fitzgerald and her colleagues and students in the 1980s in an attempt to standardise measurement of the nature and extent of sexual harassment in universities and the workforce.[484] It identified sexual harassment items in behavioural terms within five general categories: gender harassment, seductive behaviour, sexual bribery, sexual coercion and sexual assault.[485] The instrument avoided the words ‘sexual harassment’ until its end ‘thus avoiding the necessity for the respondent to make a subjective judgement as to whether or not she had been harassed before she could respond.’[486]

In 1995 Fitzgerald et al., published a theoretical and empirical revision of the SEQ. They reported that the SEQ had been used in a number of educational, occupational and organisational settings, been translated into numerous languages, and used in cross-cultural settings.[487] Fitzgerald et al., refined their conceptual framework, and proposed that sexual harassment was composed of three related dimensions: sexual coercion, unwanted sexual attention and gender harassment.[488] Gender harassment referred to ‘a broad range of verbal and nonverbal behaviours not aimed at sexual cooperation but that convey insulting, hostile, and degrading attitudes about women’; unwanted sexual attention included ‘a wide range of verbal and nonverbal behaviour that is offensive, unwanted, and unreciprocated’; and sexual coercion constituted ‘the extortion of sexual cooperation in return for job-related considerations.’[489]

In the mid 1990s, the SEQ was adapted for use in a military environment, based on over a decade of psychometric research.[490] This version – referred to as the SEQ-DoD – divided gender harassment into sexist hostility (what is generally thought of as gender harassment) and sexual hostility (the more sexually charged elements of gender discrimination).[491] The SEQ-DoD was administered to more than 28,000 U.S. military personnel in 1995, and along with derivatives of this version, has remained a prominent tool for surveying sexual harassment in the U.S. military.[492] The SEQ-DoD was also used in the 1995 Australian Defence Force Sexual Harassment Survey.[493] These SEQ items formed the basis for the 1998 survey of ADFA Cadets used in the Grey Review, and have remained ADFA’s gender and sex-related harassment questionnaire items until 2011.



[474] The proportional figure for 2005 is arrived at by taking the figure of 837 participants quoted in the 2005 report and dividing it by the 977 cadets recorded for that year in the ADFA Annual Status Report: see ‘110819 Broderick Review Task 100 and task 80’ provided to the Review by LTCOL N Fox, 19 August 2011. Directorate of Strategic Personnel and Planning Research, A Survey of Experiences of Unacceptable Behaviour at the Australian Defence Force Academy, DSPPR Report x/2005 (2005) p 4; People Strategies and Policy Group Workforce Planning Branch, Australian Defence Force Academy 2011 ADFA Unacceptable Behaviour Survey, DSPPR Report 5/2011, Department of Defence, p 1.

[475] People Strategies and Policy Group Workforce Planning Branch, note 1, p 9.

[476] For a discussion of these different methods in the context of sexual harassment, see R Ilies, N Hauserman, S Schwochau and J Stibal, ‘Reported Incidence Rates of Work-Related Sexual Harassment in the United States: Using Meta-Analysis to Explain Reported Rate Disparities’ (2003) 56(3) Personnel Psychology 607.

[477] The reason for this approach, with particular reference to the SEQ, is explained more fully in LF Fitzgerald, SL Shullman, N Bailey, M Richards, J Swecker, Y Gold, M Ormerod and L Weitzman, ‘The Incidence of Sexual Harassment in Academia and the Workplace’ (1988) 32 Journal of Vocational Behaviour 152, p 157.

[478] Directorate of Strategic Personnel Policy Research, Australian Defence Force Academy 2008 Unacceptable Behaviour Survey, DSPPR Research Report 38/2009, p 3.

[479] R Ilies, note 3, p 610.

[480] The DSPPR’s analysis of the 2008 Unacceptable Behaviour Survey notes that only some elements of the survey ‘such as the SEQ and AUDIT’ have ‘undergone reliability and validity testing’. AUDIT has been removed from the 2011 instrument. Directorate of Strategic Personnel Policy Research, note 5, p 3.

[481] The differences between the SEQ items used in the surveys between 1998 and 2011 include a reorganisation of the wording in item g; the inclusion of ‘email’ in the media by which offensive material could be distributed in item h; the use of ‘gender’ rather than ‘sex’ in item k; and the recasting of item v in more behavioural terms.

[482] AM O’Leary-Kelly, L Bowes-Sperry, CA Bates and ER Lean, ‘Sexual Harassment at Work: A Decade (Plus) of Progress’ (2009) 35(3) Journal of Management 503, p 527; BA Gutek, RO Murphy and B Douma, ‘A Review and Critique of the Sexual Experiences Questionnaire’ (2004) 28(4) Law and Human Behaviour 457, p 472.

[483] People Strategies and Policy Group Workforce Planning Branch, note 1, p 8.

[484] LF Fitzgerald, note 4, p 155.

[485] LF Fitzgerald, above, p 157.

[486] LF Fitzgerald, above.

[487] LF Fitzgerald, MJ Gelfand and F Drasgow, ‘Measuring Sexual Harassment: Theoretical and Psychometric Advances’ (1995) 17(4) Basic and Applied Social Psychology 425, p 428.

[488] LF Fitzgerald, above, p 430.

[489] LF Fitzgerald, above, pp 430-31.

[490] LF Fitzgerald, VJ Magley, F Drasgow and CR Waldo, ‘Measuring Sexual Harassment in the Military: The Sexual Experiences Questionnaire (SEQ-DoD)’ (1999) 11(3) Military Psychology 243, p 243.

[491] LF Fitzgerald, above, p 261. Sexist hostility items include e, h, i and k on the comparative table included, while sexual hostility items include a, b, c, d, f, g, l and m.

[492] Updated survey instruments draw heavily on the SEQ-DoD while seeking to tailor its administration and methodology in ways more sensitive to the military environment. See a discussion of the 19 behaviour items from the 2002 Status of the Armed Forces Surveys: Workplace and Gender Relations, the 2004 Workplace and Gender Relations Survey of Reserve Component Members and the 2005 Service Academies Sexual Assault Surveys in RN Lipari, M Shaw and LM Rock, ‘Measurement of Sexual Harassment and Sexual Assault Across Three US Military Populations’ (2005), Defense Manpower Data Center, pp 6-9. At www.internationalmta.org/Documents/2005/2005106P.pdf (viewed 19 July 2011); and a discussion of the Sexual Harassment Core Measure (SHCore), identified in 2011 as the U.S. military’s ‘chosen measure’ and ‘a 12-item derivative of the Sexual Experiences Questionnaire’ in M Murdoch, JB Pryor, JM Griffin, DC Ripley and GD Gackstetter, ‘Unreliability and Error in the Military’s “Gold Standard” Measure of Sexual Harassment by Education and Gender’ (2011) 12(3) Journal of Trauma & Dissociation 216, p 218.

[493] MAJ K Quinn, Sexual Harassment in the Australian Defence Force, Department of Defence (1996). At www.defence.gov.au/fr/reports/SHinADF.pdf (viewed 10 August 2011).