FROM MY ARCHIVE: RESEARCH ON QUALITY ASSURANCE AT THE REGIONAL LEARNING CENTRE KIMBERLEY, CENTRAL UNIVERSITY OF TECHNOLOGY, FREE STATE
CJ Duvenhage and KJ de Beer, 2006

ABSTRACT

Given the new Higher Education landscape in South Africa, Universities of Technology will have to adapt to the changing socio-political scene on and off their campuses. This implies that public accountability is becoming the most significant vehicle of government policy and an integral part of decision-making models. It also extends to part-time services at distance campuses and regional learning centres. In 2004 the Higher Education Quality Committee (HEQC) audited the Kimberley Regional Learning Centre of the CUT precisely against the universal standards related to quality assurance for part-time or so-called distance learners. These universal concepts entail self-evaluation practices, selection approaches, efficiency and performance and, of course, public accountability. In follow-up reports to the HEQC on what has been done to ensure better services to part-time distance learners, the manager of the Regional Learning Centre and the director for distance learning launched an ongoing research project on quality assurance to find answers to the problems that the HEQC audit revealed. This article contains the very first phase of an ongoing process to research universal standards, standards that apply to other Universities of Technology as well; in short, to benchmark standards scientifically.

1. DESCRIPTION OF THE CONTENT

Evaluation is a sub-function of management, indicating the universal relationship between planning and the other functions of management. It serves the purpose, inter alia, of identifying the strengths and weaknesses of institutions, and assists leaders/managers in their planning and decision-making processes. It also helps them to modify and develop scenarios to accomplish institutional goals within their respective societies. Consequently, evaluation processes have to include ongoing programmes of self-analysis by means of which an institution continuously gathers information about itself. It should also be a deliberate process used by the institution to assess its own activities, to determine discrepancies, and to suggest and implement corrective measures and improvements.

Self-evaluation is also a pragmatic process. The design of the self-evaluation process can therefore be adapted to suit the specific circumstances of any institution. It is done in relation to the campus climate and the external environment, especially at its distance campuses. In some instances the South African situation could be compared with the rapidly changing global landscape in which distance education learners are expected to comply with the expectations of a multicultural labour market. At the same time, international competition in labour markets will force Higher Education Institutions, which include Universities of Technology, to improve the general quality of their diplomates and graduates, because comparative quality between various countries will become the main issue, and quality assessment the main problem.

The first phase of quality assurance at institutions of Higher Education normally includes the establishment of independent self-evaluation programmes. External evaluation with regard to co-operative partners in Higher Education could be referred to as phase two.
All Universities of Technology in South Africa are evaluated by the Higher Education Quality Committee for the external validation and provision of quality education and training, in line with the National Qualifications Framework. This refers to phase three of quality assurance in this research paper.

2. INTRODUCTION

Universities of Technology realise the pressure of public accountability borne by distance learners, employers and the government. The control of costs, the elimination of duplication (and, in some cases, of unique options which are perceived to be too costly) and evidence of other efficiencies are aspects to be considered by the legislative authorities and agencies regulating Universities of Technology. Similarly, demands for greater productivity in Universities of Technology offering Distance Education will continue to be made with greater frequency than at any time in the past. Along with the focus on accountability comes pressure to adopt "the business model" with its greater emphasis on "the bottom line". (Koorts 1996) Staff productivity is definitely part of the issue, but there is an increasing concern for distance learner productivity and more attention to such aspects as dual contact hours and academic support.

The consumers and other partners in Higher Education have become much more sophisticated. They look for accountability, but also seek quality. They are more likely to define quality in the language of the "quality improvement movement", that is, satisfaction with customer resources as represented by the size of libraries, staff-to-learner ratios, and the number and size of grants and contracts won by staff components. They evaluate increased competition among Higher Education providers, because the outputs must be to their advantage as consumers. They expect a market abounding with competitive pricing (tuition) and differentiation (quality). (Ibid.)

However, the problem of massification in Distance Education is becoming a major obstacle in the way of quality education. Negative socio-economic factors in South Africa contribute to a worsening future scenario. Research conducted by the Joint Enrichment Programme found that 34 percent of all young people are "marginalized or very marginalized" (that is, in prison, unemployed or unskilled, and with little prospect of legal employment). A further 43 percent are in danger of becoming marginalized. Population projections estimate that over 50 percent of the population will soon be under 21 years of age. This means that serious work needs to be done if these young people are to have a secure future. (Cullinan 1996)

According to Tugend (1996), a financial crisis could also compel British universities, for example, to terminate the tradition of free Higher Education and to retrench staff, even while enrolment increases. Similar demands for free Higher Education at, and easier access to, institutions in South Africa, such as a University of Technology's distance education, are also becoming more insistent. In comparison, "it is a rather gloomy prospect", as G. Roberts, Vice-Chancellor of the University of Sheffield and Chairperson of the Vice-Chancellors' Committee in Britain, phrased it. (Ibid.) In view of the above, Universities of Technology will have to adapt to the changing socio-political scene on and off their campuses.
Koorts (1996) warns that, in general, Universities of Technology are compelled to review the strengths and weaknesses of their curricula, instructional methods, research and community service through quality leadership. This should be done in conjunction with other partners in Higher Education.

3. RELATIONSHIPS WITHIN INSTITUTIONAL SELF-EVALUATION

The general idea of evaluation models is to improve institutional self-evaluation. However, due to the accelerated societal changes in the modern world, Universities of Technology are further compelled to take into consideration those self-evaluation models which are aimed at achieving the relationships between strategy, the environment, the reaction of management, the quality system and institutional improvements in other Higher Education institutions with holistic quality mechanisms. Kells and Van Vught (1988: 71) stipulate the following goals:

- Needs assessment
- Economic analysis
- Basic research
- Small-scale testing
- Field evaluation
- Policy analysis
- Fiscal accountability
- Impact assessment

Selecting from the varying models that have been researched, the University of Technology Free State and the University of the Free State, for example, have implemented some elements or sub-elements of the following models:

- The Institutional Self-Study Model
- The S-W-O-T Analysis Model
- The Institutional Effectiveness Cycle Model
- A suggested self-evaluation model of Kells for Hogeschool Grootennieuw

After attending collaborative workshops with Kells during his visit to the University of the Free State in 1992, the University of Technology Free State also embarked on a naturalistic selection approach with a view to establishing a Total Quality Management (TQM) programme. In accordance with the current transformation process in Higher Education in South Africa, the University of Technology Free State is implementing most of the sub-elements of his suggested model in order to achieve Quality Education Management. (cf. Vermeulen 2001)

4. APPROACHES TO INSTITUTIONAL SELF-EVALUATION

Various approaches to institutional self-evaluation exist, of which the following are typical examples:

- A democratic approach
- An autocratic approach
- A bureaucratic approach
- A naturalistic selection approach

4.1 THE NATURALISTIC SELECTION APPROACH

Universities of Technology could also, for example, give preference to the naturalistic selection approach, as it includes some elements of a democratic approach while simultaneously avoiding autocratic and bureaucratic pitfalls. Kells also refers to the naturalistic inquiry process as the "natural selection model", by means of which specific types of institutions succeed in surviving while it makes life intolerable for others: "Like biological species, organizations are supposed to go through a process with three stages: namely variation, selection and retention." (Kells and Van Vught 1988: 24) Retention in this sense means that certain institutions are only successful if their environment does not change. However, when the environment changes drastically, as in the South African socio-economic and political scenario, institutions will be faced with the problem of responding to their new circumstances. (cf. Ibid.: 25) It could even be compared with "horizontal reduction" or "management of decline". In this scenario radical political changes compel administrators to merge and consolidate academic forces for basic survival. (cf. Berdahl et al. 1991: 93)
"Ethnographic portrayal" (cf. Lewy 1991: 223) in institutional self-evaluation (ISE) programmes is still of the utmost importance, especially in a South African context. It does not need to be an essentially political enterprise, provided that the common aims encapsulated in a unified perspective only deal with understanding: "…actualities, social realities, and human perceptions that exist unattained by the obtrusiveness of formal measurement or preconceived questions." Naturalistic inquiry attempts to present true-to-life episodes documented through natural language (own italics) and representing as closely as possible how people feel, what they know and what their concerns, beliefs, perceptions and understandings are. (Elliott in Lewy 1991: 219)

In South Africa, a predominantly new way of thinking has emerged in a similar way to democratic models elsewhere in the world. (cf. Ibid.: 220) This implies that public opinion is becoming the most significant vehicle of the political system and an integral part of decision-making models. Nichols (1989: 127) agrees that the public service and community impact measures that institutions may consider include the following:

- Enrolment levels and community participation in distance programme offerings.
- The extent to which an institution participates in community affairs or makes its social, cultural and recreational programmes and facilities available to the community.
- The economic impact of an institution upon its local community.

In this way, local industries will be aware of institutional self-evaluation programmes and their efforts to seek financial aid in related research areas of the community. (cf. Ibid.: 185) Public and private sector demands may also compel institutions to use self-evaluation programmes to re-evaluate their own ideas in order to adapt to the needs of society, government budgets and sponsorships. However, since World War II the rate of change has increased tremendously, causing the so-called pendulum effect (i.e. industrial demands over production; market needs; the population explosion; political changes, and international movements regarding these aspects). According to Berdahl et al. (1991: 60), the prevailing presence of this pendulum and tension effect does not imply that the concepts of quality and equity are antithetical: "It means that there have been periods in which one or the other has tended to be in the ascendancy in public and academic awareness and commitment". (Ibid.) South Africa is, for instance, experiencing the tensional period inherent to the pendulum effect, due to the legacy of a particular political era.

Mel Holland, Information Officer of the National Education Co-ordinating Committee of the African National Congress of South Africa, says that education is a "linear process". Therefore it needs to be reformed, proceeding from the primary to the tertiary level. This is necessary, as Higher Education standards are too high for Black grade twelves with a backlog. (Star, 18 January 1992) In this respect Universities of Technology should realise that they are not neutral in the education process. Dr R.R. Arndt of the South African Centre for Science Development is also convinced of an international move away from an academic towards an economic linkage in Higher Education self-evaluation and strategic planning. This trend will have to become a reality in the New South Africa as well. (Arndt 1992)
The significance for distance education, and especially for the modern approach of Open Learning (OL) which is now replacing former concepts for part-time and distance learners, is that it implies even greater public accountability for Higher Education Institutions (HEIs). This is because the boundaries between learners, the workplace and HEIs become more blurred in compulsory experiential learning (read Co-operative Education), where full-time employees who are enrolled as part-time or distance learners are actually linking the real world of work to the theory of the lecture room. The same applies where full-time employees are appointed as contracted part-time lecturers at HEIs.

5. PARTNERSHIPS AND PHASES OF INSTITUTIONAL SELF-EVALUATION

The first phase of quality assurance at institutions of Higher Education in general is normally to install independent self-evaluation programmes in which the criteria for quality vary according to the goals of a specific course, programme or institution, similar to the situation at most Western European universities (cf. De Weert 1990: 68). De Weert also agrees that institutions should be allowed to shape and pursue their own objectives, which implies that evaluation should be initiated as an internal process. De Weert's formulation for phase one, however, does not exclude the necessity of external evaluation by partners in Higher Education.

External evaluation could be referred to as phase two. This is an inevitable phase, because without external evaluation there is little motivation for self-evaluation to effect institutional improvement (Cook 1988: 7 in De Weert 1990: 69). External evaluators (partners) could also be arranged by the institutions themselves during phase two. Gevers (1985: 146) refers to it as "the middle layer of review". The time lapse between internal and external evaluation allows the institution an opportunity to correct deficiencies. Kells and Van Vught (1988) describe it as "an institution-centred model of quality control, in which ongoing collaborative studies and an institutional self-study process are the main elements…" (Van Vught in De Weert 1990: 69). As such, it will guarantee the enhancement of quality.

The central idea is to eventually legitimise ISE programmes which will be externally validated by outside agencies. It implies a system of accreditation to be rounded off in phase three. (cf. Gevers 1985: 146) A comparison between phases one, two and three reveals that the role of the government or accrediting body is limited to only one third of the evaluation process. Yet accreditation is still the most important phase, because it legitimises internally and externally defined goals at individual, institutional and societal level. (cf. De Weert 1990: 69; Rosenbloom 1981: 13) With reference to minimum government interference, South African Higher Education institutions, with their autonomous academic status, also share the general idea of European universities that "quality control is considered to be a logical consequence of a reduction of governmental interference in the internal matters of both the university and the non-university sector". (De Weert 1990: 67; cf. Acherman 1990: 191)

5.1 QUALITY ASSURANCE

Only since 1978/79 have a number of books been published in the United Kingdom, the USA, Canada and Australia on the aims, values and goals of Higher Education. (cf. Barnett 1990: 4-8) Research quality in Higher Education has only recently captured the attention of researchers. (cf. Brodigan 1991: 1)
To assess quality, universities have to use their innermost judgement. Barnett (1990: 114, 121) says: "we can only know our institutions of Higher Education by coming to understand the quality of their internal life." Subsequently, the following aspects of quality assurance, from a British White Paper quoted by Kells (1991) (also cf. Council on Higher Education 2004), are applicable examples for further reference:

- Quality control: mechanisms within institutions for maintaining and enhancing the quality of their provision.
- Quality audit: external scrutiny aimed at providing guarantees that institutions have suitable quality control mechanisms in place.
- Validation: approval of courses by a validating body for the award of its degrees and other qualifications. (cf. Sensicle 1991: 13-14)
- Accreditation: in the specific context of the Council for National Academic Awards (CNAA), delegation to institutions, subject to certain conditions, of responsibility for validating their own courses leading to CNAA degrees. (cf. Williams 1991: 1-3)
- Quality assessment: external review of, and judgements on, the quality of teaching and learning in institutions. (cf. Kells et al. 1991: 14-15, which compares ISE in Holland and the United States of America)

However, Acherman (1990: 182) explains that, although we are living in an era of retrenchments and, in the case of South Africa, affirmative action, the goal of quality control should not be used for reasons of retrenchment, but rather for quality improvement. This is because institutions are supposed to have already paid attention to quality assurance themselves, and many of them have done so in one way or another. A system of quality assurance on a national scale should therefore be used complementary to "systems" that already exist at institutional level. Hence, it will become a system of "external" quality assurance alongside the "internal" ones within the institutions.

5.2 EFFICIENCY AND PERFORMANCE

The distinction between effectiveness and efficiency is important for self-evaluation. Lindsay (1982: 29) says: "the evaluation of institutional or sub-unit 'performance' can be regarded as having two components, namely the assessment of the level of effectiveness and the assessment of the level of efficiency." Due to its academic mission and Higher Education character, a University of Technology's inputs cannot always be measured in classic management terms of productivity. Bull (1990: 35) also says that "… a university is different and cannot be managed solely by industrial/commercial methods. It would be a mistake to turn brilliant academics into wheeler-dealers".

The goals of Universities of Technology and other HEIs differ among themselves. They could therefore be compared with institutions in Australia, which also change with the passage of time and whose goals differ for various groups. The same principle applies in South Africa, but in more rigorous terms. Here different interest groups may be in conflict, and the issue of factional conflicts and goals cannot be readily accommodated by clear-cut self-evaluation processes. (cf. Lindsay 1982: 29; Tertiary Development News: November 1991) The classical management perspective of self-evaluation still provides a useful formula for evaluating effectiveness, namely comparing outcomes with goals in order to correct deviations from intended goals.
Lindsay qualifies this approach as follows: "The outputs of Higher Education are not solely goods whose value is determined in the market place, so the relative value is only partially determined by a price mechanism. Even if a market system existed for all the outputs, it would only provide one way of measuring value. The value of the outputs of Higher Education is a complex and normative issue…" (Lindsay 1982: 30)

The danger exists that normative issues could be discarded to suit political expectancies. Especially in South Africa, with its diverse ethnic and cultural groupings, which are highly politicised, it is very difficult to introduce an overall concept or model of "institutional performance". (cf. Tertiary Development News: November 1991) Research and study in institutional evaluation are simply too manifold. Evaluation committees themselves should monitor and guide the political development aspects within their environmental context. In this regard Miller warns against the pressure to adopt naive solutions to sensitive matters. The best defence is a good offence: better ideas and approaches. (Miller 1981: 89)

5.3 INSTITUTIONAL AND PUBLIC ACCOUNTABILITY

Institutional accountability is described in the specific terms of student outcomes assessment. Especially in the United States, governments and accrediting agencies are demanding that HEIs should document the progress and learning achievements of their learners. Due to these external demands, HEIs in the USA are now attempting to be more self-regarding and sensitive to learner outcomes in evaluating and designing courses, programmes and services (Clagett and McConochie 1991). Due to interactive research in quality assurance between South African HEIs and their USA counterparts after 1994, this phenomenon eventually became standard practice in South Africa too.

Besides their internal accountability, Higher Education institutions also have an external accountability towards the community, the public media and the government of the day. Especially from the perspective of the State, ISE plays a central role in processing public accountability. At Finnish universities, one of the main conclusions of the Project for Improving the Managerial Effectiveness of the University was that "developing evaluation should be simultaneously approached from the perspectives of public accountability and organizational learning. A Higher Education institution is a 'bottom heavy' organization. In order to be effective, the initiative in educational innovations and in responses to external changes must originate from the disciplinary departments. Careful self-evaluation would also serve the requirement of public accountability". (Holtta and Pulliainen 1991: 310)

Accountability in Higher Education implies that institutions are accountable to at least three different groups, namely the clients (learners and employers), society (government) and the subject (professionals and colleagues). Both learners and the employers of graduates, who desire the highest degree of professional competence, are the clients and subjects. Higher Education Institutions exist to safeguard and transmit a cultural heritage, and society needs assurance that universities and Universities of Technology are not failing in this obligation. Higher Education staff are responsible for upholding accountability for their disciplines to their professional colleagues. (Ibid.)
Because there is also the danger of moving too fast with a performance process, Higher Education institutions in the UK, for instance, rather favour a system of turning accountability into staff development. (cf. Bull 1990: 45; Partington 1990: 20-21)

5.4 NEW PARTNERS

Proposals S14, S15 and S16 of the National Commission for Higher Education (1996), which eventually laid the foundation for the National Plan for Higher Education (2001), introduced new partners to Higher Education in South Africa.

Proposal S14 suggests a single qualification framework for all Higher Education qualifications, as part of the National Qualifications Framework (NQF). It implies that it should include intermediate exit qualifications within multiple-year qualifications and should consist of a multilevel set of qualifications, from Higher Education certificates and diplomas, through bachelor's degrees and advanced diplomas, to master's and doctoral degrees. The Committee of Universities of Technology Principals (CTP) supported the proposal, but with the provision that the identity of the institution should not be sacrificed in order to facilitate the structure of NQF articulation. It is the CTP's opinion that the implementation of articulation becomes more complex as the student progresses, because a stage is reached at which a commitment to a particular career must be made. The impact that this proposal will have on Universities of Technology can only be ascertained once it is fully structured. It undoubtedly implies that new curricula will have to be compiled to fit in with the framework, making the process of articulation easier. It should also comply with the agreements concluded with local advisory committees.

According to Proposal S15, all Higher Education programmes should be indicated on the National Qualifications Framework, at least at the level of the relevant qualifications. National Standards Bodies will determine the appropriate form of registration in terms of the unit standards used within qualifications. National Standards Bodies should also be assigned the task of ensuring that a coherent multilevel set of qualifications is developed and registered in each subject field. It is vital that this is done in all professional fields where difficulties with articulation are felt most acutely. The CTP commented that they support this proposal, maintaining the same reservations as those applicable to S14.

According to Proposal S16, a Higher Education Quality Council should be established and should absorb the former functions of CERTEC (Certification Council for Universities of Technology Education), and those proposed for the Quality Promotion Unit. This Council should be recognised by the South African Qualifications Authority (SAQA) as the umbrella monitoring/auditing body for Higher Education programmes. The Council should consist of three divisions, namely those responsible for (1) institutional auditing, (2) programme accreditation, and (3) quality promotion, and should be managed by a Board composed of individuals drawn from inside and outside the University of Technology system. The CTP supports this proposal. Fortunately, Universities of Technology have well-structured quality assurance systems in place, linked to the external auditing system already imposed by the former CERTEC. The system has significantly enhanced the standard of education at a University of Technology.
The proposal fits in with what is already being done at these institutions. CERTEC has now been replaced by the Higher Education Quality Committee (HEQC); internal auditing is done by means of different self-evaluation instruments, and programmes are accredited within the National Qualifications Framework (NQF) after external evaluation. Universities of Technology must now continually assess whether their developing quality assurance system fits in with the criteria which will be set by SAQA through the HEQC, as well as with its functions of institutional auditing, programme accreditation and quality promotion.

6. THE WAY FORWARD

All ISE models thus reflect the crucial relationship between self-evaluation and planning (i.e. institutional, departmental strategic, functional or operational planning). The models, which precede planning, are vital to achieving success. The most noticeable aspects of the various diagrams are their continuity and the interdependence of their sub-elements. When analysing both descriptions and models of ISE, the fact that the professional practice of evaluation is not "a single whole thing" immediately comes to the fore (Hart in Kells and Van Vught 1988: 76). This is because the field of evaluation "encompasses all the higher functions of the human mind: sensory experience, observation, analysis, categorisation, comparison, synthesis, interpretation, judgement, problem-solving, and decision-making." (Ibid.) It is also because evaluation embodies the metacognitive ideals of Homo sapiens: man's desire to make sense of the universe, to understand it, and to control parts of it in contemporary terms. (cf. Ibid.) To manifest man's highest aspirations, academics are forever searching for, selecting and enhancing the better ideals of life. Therefore, evaluation becomes the proverbial tool in the hope of improving the quality of human existence by upgrading the Higher Education institutions that so drastically affect the quality of the modern world. (Ibid.)

ISE is also used to determine reputational ratings according to available resources, learner outcomes and talent development, and becomes enhanced by the added value. (Saunders 1992: 71) The predominant aspects, namely those pertaining to teaching at undergraduate and postgraduate level and to research, are to be found in the true mission of Higher Education institutions. However, "there is a tension between the degree of access and the achievement of quality. Adequate financial, human and physical resources will have to compete with the demands of many other pressing needs in the future. The tension can only be met if the system can be developed in such a way and is flexible enough to allow reasonable access in proportion to the resources available". (Ibid.)

There is a real danger, however, of the focus being exclusively on the theoretical aspects of ISE, and of the suggested models not always being sufficient to encompass the context in which the theory could fit. Parsons (1991: 56) warns that by considering an activity apart from the context in which it occurs, it may be relegated to the realm of abstract conceptualisation. ISE therefore "needs to be described in terms of the contextual issues which shape its course and its form. It is vital to establish the contextual parameters, and to illuminate their effects in terms of practical case studies". A variety of opinions expressed in the literature profess that there is no single or standardised method that can effectively assess the complexities of Higher Education. (cf. National Association of State Universities and Land Grant Colleges 1988)
Although there seems to be some consensus about the importance of quality assurance, review processes may be subdivided into formative and summative evaluation. The formative evaluation process implies that evaluation is still in progress, with no immediate consequences for micro-, meso- and macro-reviews. (cf. Gevers 1985: 145) Summative evaluation, however, has operational consequences. It becomes more serious when the internal and external reviews result in linkages with operational effects. In this regard, Gevers (Ibid.: 148) spells out some of the serious conflicts within ISE processes, among others that: "more external evaluation in a really academic environment is always inferior to more internal evaluation. Two principles are in conflict here. The 'academic' community's norms are based on: self-setting of standards; striving for the best; academic freedom and the linked concept of non-interference. On the other side, the managerial approach encompasses non-contamination of interest and judgement; clean and efficient procedures; striving for objective information and decision-making as the only use of information".

The last word has most certainly not been said, especially about quality standards in a University of Technology's distance education, and this means that there is still many a tale to be told!

BIBLIOGRAPHY

Acherman, H.A. 1990. Quality assessment by peer review. Higher Education Management 2(2), pp. 179-192.
Arndt, R.R. 1992. The annual opening address at the University of the Orange Free State, Bloemfontein, 5 February 1992.
Barnett, R. 1990. The idea of Higher Education. Buckingham: SRHE and Open University Press.
Berdahl, R.O., Moodie, G.C. & Spitzberg, I.J. (Eds.) 1991. Quality and access in Higher Education: Comparing Britain and the United States. Buckingham: SRHE and Open University Press.
Brodigan, L. 1991. Focus group interviews: Applications for institutional research. AIR Professional File, No. 43, Winter 1992.
Bull, I. 1990. Appraisal in universities: A progress report on the introduction of appraisal into universities in the United Kingdom. United Kingdom: CVCP.
Clagett, C.A. & McConochie, D.D. 1991. Accountability in continuing education: Measuring non-credit student outcomes. AIR Professional File, No. 42.
Cullinan, K. 1996. Commission with a mission. Democracy in Action 10(4), p. 13.
De Weert, E. 1990. A macro-analysis of quality assessment in Higher Education. Higher Education 19, p. 67.
Elliott, J. 1991. Changing contexts for educational evaluation: The challenge for methodology. Studies in Educational Evaluation 17, pp. 215-238.
Gevers, J.K.M. 1985. Institutional evaluation and review processes. International Journal of Institutional Management in Higher Education 9(2), pp. 145-150.
Holtta, S. & Pulliainen, K. 1991. Management change in Finnish universities. Higher Education Management 3(3), pp. 310-321.
Kells, H.R., Maassen, P.A.M. & De Haan, J. 1991. Kwaliteitsmanagement in het hoger onderwijs. Utrecht: Lemma.
Kells, H.R. & Van Vught, F.A. (Eds.) 1988. Self-regulation, self-study and programme review in Higher Education. Culemborg: Lemma.
Koorts, A.S. 1996. Total Quality Management. In-house seminar on the Distance Education Programmes of the University of Technology Free State, Bloemfontein, 2 September 1996.
Lewy, A. 1991. Studies in Educational Evaluation. New York: Pergamon Press.
Lindsay, A. 1982. Institutional evaluation: Can it contribute to improving university performance?
Miller, R.I. 1981. Some concluding remarks. New Directions for Institutional Research 29, pp. 89-91.
National Association of State Universities and Land Grant Colleges, Council on Academic Affairs. 1988. Statement of principles on student outcomes assessment. NASULGC.
National Commission for Higher Education. 1996. South Africa.
National Plan for Higher Education. 2001. South Africa.
Nichols, J.O. 1989. Institutional effectiveness and outcomes assessment implementation on campus: A practitioner's handbook. New York: Agathon.
Parsons, P.G. 1991. The contextual issues of institutional self-evaluation. In Report: Conference on Institutional Self-evaluation. Bloemfontein: UOFS. June.
Partington, P. 1990. USDTU update: A progress report on achievements and targets. United Kingdom: CVCP.
Policy Advice Report. 2004. Council on Higher Education. Advice to the Minister of Education on Aspects of Distance Education Provision in South African Higher Education. Pretoria. 15 March.
Rosenbloom, A.A. 1981. Relationship of the self-study process to institutional effectiveness and accreditation. (S.l.)
Saunders, S.J. 1992. Access to and quality in Higher Education: A comparative study. University of Cape Town.
Sensicle, A. 1991. Quality assurance in Higher Education: The Hong Kong initiative. Paper presented at the international conference of the Hong Kong Council for Academic Accreditation on Quality Assurance in Higher Education, Hong Kong, 15-17 July 1991.
Star: 18 January 1992.
Tertiary Development News: November 1991.
Tugend, A. 1996. Financial crisis may force British universities to end tradition of free Higher Education. The Chronicle of Higher Education, A39, July.
University of the Orange Free State. 1991. Report: Fifth Conference on Institutional Self-evaluation. June. Bloemfontein: UOFS.
Vermeulen, W. 2001. The University of Technology Free State Quality Management and Improvement Strategy. June. Bloemfontein: University of Technology Free State.
Williams, P.R. 1991. The CVCP Academic Audit Unit. Paper presented at the international conference of the Hong Kong Council for Academic Accreditation on Quality Assurance in Higher Education, Hong Kong, 15-17 July 1991.
-- Posted By Kallie (Karel Johannes) de Beer to Dr Karel Johannes de Beer at 2/13/2015 07:38:00 AM