History and development
The Australian Graduate Survey (AGS) is the umbrella term for the Graduate Destination Survey (GDS) and the Course Experience Questionnaire (CEQ). The GDS came first, developed mainly through the efforts of university careers advisers to acquire information from recent graduates on the outcomes of their higher education, including employment and labour market status, involvement in further study, and other destinations.
While early developments operated largely at the institutional level, it was suggested in the late 1960s that the newly established Graduate Careers Council of Australia (GCCA, later becoming Graduate Careers Australia or GCA) manage the national collection of data on graduate outcomes. It was not until 1972, however, that support was received for a unified national survey.
The first national GDS was conducted in 1972 by the Careers Service at Australian National University (ANU). This was repeated by ANU in 1973 and responsibility for the GDS moved to the GCCA in 1974 where it has remained since.
By 1979, all major universities and numerous institutes of technology and colleges of advanced education were participating in the survey. In about ten years, the survey had grown from a number of in-house developments at institutions to a major national survey of all Australian university graduates. The GDS has been administered to all recent graduates on an annual basis ever since, throughout the shifts, expansions, unifications and mergers that have defined Australian higher education.
The Course Experience Questionnaire (CEQ) was added in 1992 and updated in 2002.
The Postgraduate Research Experience Questionnaire (PREQ) was added in 1999.
Each graduate is presented with a single survey form containing either the GDS and CEQ (if they are a coursework graduate) or the GDS and PREQ (if they are a research graduate). This bundling allows for administrative economies and simplifies combining the GDS and CEQ or PREQ data sets for analysis that draws on both.
In 2006, given the bundling of the three instruments into the one overarching process, the umbrella term ‘Australian Graduate Survey’ was adopted to refer to the combined surveys.
The AGS in domestic and international contexts
Over the last 30 years, the AGS has grown to play an increasingly significant role in shaping our understanding of Australian university education. The GDS, CEQ and PREQ are the largest regular surveys conducted in Australian higher education. Aside from quantitative measures of student success, AGS results provide the only nationally generalisable measures of graduate outcomes in Australian higher education.
The survey results are multifaceted, providing validated measures of destinations and the educational experience. By tapping into core qualities of each student’s learning and development, the surveys provide measures that generalise across divergent educational contexts and can be compared across time. The surveys link the interests of many diverse stakeholders - both within institutions and across the sector - and have become woven into the fabric of conversations about quality in Australian higher education. As a reflection of this, the AGS assumed an important new role in 2005 when key questions were included in the basket of indicators used to allocate the Learning and Teaching Performance Fund (LTPF).
Although the surveys are not especially large, they are complex. This complexity is underpinned by several factors.
First, the surveys have been conducted for over 30 years. They have been developed and managed by GCA (formerly GCCA) over this time, but they have also evolved and been shaped by systemic and pragmatic forces.
Second, the surveys have been conducted in a highly collegial fashion, with key aspects of their administration decentralised to all participating institutions. Hence, they are actually conducted as around 45 surveys, each covering between 80 and 15,000 graduates. This collaborative arrangement reflects and, in important ways, has sustained the need for the surveys to balance national and institutional interests. National collaboration is almost an intrinsic part of the surveys.
Third, the surveys are in many ways internationally unique in terms of the role they play in national conversations about Australian higher education. While the USA conducts the Baccalaureate and Beyond Longitudinal Study (NCES 2005), it is a longitudinal sample survey conducted only every five years or so, which is intended to be representative at the national level. In the USA, the intra-institutional analysis of ‘alumni destinations’ far exceeds any national analysis of graduates. The closest equivalent survey appears to be the UK Destinations of Leavers from Higher Education survey (HESA 2005).
Fourth, the surveys cut across many jurisdictions and relate to the needs of many stakeholders. The surveys are conducted in all Australian states and territories. Important players include GCA, graduates, higher education institutions, the Department of Education, Employment and Workplace Relations, Universities Australia, careers and student services agencies and quality agencies.
While the core purpose of each of the AGS components has been sustained over the years, their role within the sector has undergone much change and development. Although developed to provide diagnostic information for careers advisers, teachers and students, AGS data was soon being used in more summative ways to develop institutional and national policy and to index the performance of higher education programs. An important move in this direction came in the early 1990s, with the inclusion of both the GDS and CEQ in a system of performance indicators of Australian higher education (Linke 1991; Martin 1994). The inclusion of AGS data in these indicators has helped to embed the surveys into the fabric of conversations about quality in Australian higher education. The significance of the surveys was reflected and further enhanced with the creation of the Learning and Teaching Performance Fund (DEST 2005), which - for the first time - attached funding to institutional performance.
The Enhancement Project
Following the new role of the AGS in the Learning and Teaching Performance Fund and sector requests for further development of the survey process, the GDS Enhancement Project was commissioned and conducted over 2005 and early 2006. The overarching aim of the project, which was funded by the Department of Education, Science and Training’s (DEST, now the Department of Education, Employment and Workplace Relations, DEEWR) Higher Education Innovation Programme, was to review central aspects of the AGS, including survey methods and management, and data processing, analysis and reporting. The outcome of the project was a report, available from the DEEWR website, which included a range of recommendations for enhancing the survey. The following discussion considers selected key issues that predate the Enhancement Project but were examined as part of it.
Census or sample?
One of the ongoing debates surrounding the surveys is whether they should be administered as a sample survey rather than as a population census. Analysis undertaken in the Enhancement Project (Coates, Tilbrook, Guthrie & Bryant 2005) identified a number of critical problems with the sample survey approach and a number of compelling reasons for conducting a census. From a principled perspective, a census seeks feedback from all graduates, thereby giving ‘everyone the opportunity to have a say’.
Other advantages of the census over a sample methodology are:
- Methodologically, a census avoids the need for large and complex samples.
- Compared with a sample survey, a census is transparent, and easier to plan, manage and monitor.
- It provides an incentive to survey managers to collect responses from all graduates in the population.
- It provides sufficient data for the analysis of small subgroups within institutions, without the need for complex oversampling and resampling processes.
- Frameworks can be built into the conduct and analysis of the census which enable estimation of the precision and representativeness of results.
- The census approach is likely to be less demanding on institutional resources and expertise, yet provide more and higher quality returns.
Another common discussion surrounding the surveys is that of separating the administration of the CEQ and PREQ from the GDS. Usually, although not always, arguments are mounted in favour of administering the CEQ and PREQ as students are just completing their courses, and maintaining the current delayed administration of the GDS. It has occasionally been suggested that the GDS should be delayed for at least six to 12 months after graduation.
This issue was analysed as part of the Enhancement Project. It was found that the GDS, CEQ and PREQ should continue to be bundled together and administered as a single survey. There are compelling practical, methodological and substantive reasons why the surveys should be bundled:
- It would be difficult to define populations earlier than is done currently, and to ensure consistency across multiple administrations.
- The response burden of the current form is not sufficient to justify splitting it in two.
- Separation would increase response burden on graduates.
- Separation would nearly double the administrative load.
- Changing the administration time would disrupt time-series analyses.
- Campus-based administration can be more costly than other modes.
- There is value in having space for reflection between course completion and evaluation.
- Most institutions have in-house measures of education quality which provide coincident measures of quality feedback.
Growth and standardisation
The GCA surveys continue to grow in response to emerging needs of institutions and the sector as a whole. In coming years, it seems likely that they will be further incorporated into institutional planning, research and quality assurance activities. Additional questions may be included to enrich these key national measures of higher education outcomes. The surveys are likely to further develop as key policy instruments, both within institutions and between institutions, interest groups and funding agencies. The surveys have undergone, and may continue to undergo, an adjustment period in reaction to their role in the Learning and Teaching Performance Fund. Survey management and administration have now largely shifted from careers and student services to statistics and planning offices, and this trend is likely to continue. Following their new role in the Learning and Teaching Performance Fund and the Enhancement Project, high-level discussions have taken place over 2006–2008 as to whether administration should be centralised or standardised.
As the Enhancement Project concluded, further efforts at methodological standardisation are imperative, possibly accompanied by increased forms of auditing. Such movement has implications for the balance between institutional and national needs. Use of the surveys in institutional marketing activities is likely to increase, particularly in connection with alumni and development activities.
The issue of survey management arose during the conduct of the Enhancement Project, largely due to the inclusion of AGS data in the basket of indicators used to allocate the Learning and Teaching Performance Fund. Most discussions of survey management were focused on the ideas of standardisation and centralisation. The Enhancement Project was not conducted to investigate or evaluate survey management, standardisation and centralisation. It is useful and appropriate, however, to position the Enhancement Project alongside these issues.
While there is much strength and many benefits flowing from the process that has developed over the years, the Enhancement Project has offered an opportunity to renovate and develop Australia’s national graduate surveys, and has developed the knowledge and resources required for this to occur. The findings of the Enhancement Project have largely affirmed the integrity and quality of the data, and of many aspects of the methodology, which has evolved over the years. For instance, there do not appear to be significant biases in the survey responses secured, the responses appear representative of the population, most CEQ scales appear reliable, and coding processes appear robust.
The Enhancement Project also documented specific ways in which survey processes and outcomes could be strengthened. Institutions, for instance, should all use the same survey form, should do more to enhance survey engagement, should harmonise their survey distributions, should be more consistent in their selection of CEQ scales, and should resource the surveys to appropriate levels. Standardisation in such areas, it was found, would help ensure the cogency of survey outcomes.
The key point is that AGS methods should be standardised in ways that are likely to enhance the authority, validity, consistency and efficiency of the surveys.
It is important to separate questions about ‘standardisation’ from questions about ‘centralisation’. While ‘standardisation’ implies a uniform survey methodology with appropriate checks and balances, ‘centralisation’ implies having the surveys conducted by a single organisation, perhaps at a single location.
While current support for standardisation is timely given trends in Australian higher education, it is important to stress the real strength of the existing partially decentralised AGS process. The process has adapted and evolved over 30 years to provide internationally unparalleled insights into the destinations and course experiences of university graduates. The approach has been robust to many changes in national and institutional systems, partly due to its disaggregated and flexible nature, and partly due to the position of GCA within the sector. The approach has balanced institutional interests with the need for a stable and efficient national survey cycle.
As noted, there are many proven strengths in an appropriately managed, partially decentralised model. There are, conversely, many uncertainties surrounding centralisation:
- Who would resource centralised survey management?
- Would all institutions relinquish some or all control of their GDS, CEQ and PREQ surveys?
- Is centralised management possible given privacy laws?
- Would centralisation affect the level and quality of survey response?
- What change management issues for institutions are associated with centralisation?
- Is centralisation the best means of sustaining independent institutions as stakeholders in a truly national survey?
In the absence of timely and unequivocal answers to these questions, it may be more prudent to initially focus on identifying key areas of existing difference in terms of institutions’ survey methodology, and ways to further monitor and standardise the process. Sector discussions regarding a new model for the AGS have attempted to address these issues.
Moving forward: a new model for the AGS
In the wake of the enhancement project, key methodological issues were taken forward for further discussion by the Survey Reference Group (SRG), Universities Australia (UA), institutional Survey Managers and the wider sector during 2006 and 2007.
Feedback from this process was further discussed by a specially convened joint working party consisting of representatives from GCA, a DVC/PVC (Academic) sub-committee nominated by UA, and representatives from DEEWR.
Following resolution of key issues by the Working Party in early 2008, major elements of a new survey model were progressively implemented from the 2009 AGS, beginning with the October 2008 survey round.
These key elements encompassed a process which GCA and the working party believed would not only ensure the validity, consistency and verifiability of the AGS, but also enhance the authority of the survey results. Some areas of the new model still required refinement, such as the inclusion of pre-populated information on the questionnaire and the post-population of questions removed from the form. These particular aspects will be introduced as voluntary measures for the 2010 AGS (October 2009 and April 2010 rounds) but adopted by all participating institutions from October 2010. GCA has been working to determine an approach to these aspects of the new model that meets the requirements of standardisation but is also achievable by all participating institutions.
Coates, H., Tilbrook, C., Guthrie, B., & Bryant, G. (2006). Enhancing the GCA National Surveys: An examination of critical factors leading to enhancements in the instrument, methodology and process. Canberra: Department of Education, Science and Training.
DEST (Department of Education, Science and Training). (2005). Learning and Teaching Performance Fund. Canberra: DEST.
HESA (Higher Education Statistics Agency). (2005). Destinations of Leavers from Higher Education. Cheltenham: HESA.
Linke, R. D. (1991). Report of the Research Group on Performance Indicators in Higher Education. Canberra: DETYA.
Martin, L. M. (1994). Equity and General Performance Indicators in Higher Education. Canberra: Australian Government Publishing Service.
NCES (National Center for Education Statistics). (2005). Baccalaureate and Beyond Longitudinal Study. Washington: NCES.