Saturday, May 2, 2020
Education free essay sample
• Sure Start Children's Centre: family-based centres that not only provide early years education but also offer help and support to parents.
• Day Nursery: these are independently run businesses.
• Childminder: you would normally take your child to the childminder's home. They can look after up to six children, but no more than three of these can be under the age of five.
• Nanny/live-in carer: would look after your children in your own home.

There are four main types of state school, all funded by local government. They all follow the National Curriculum and are monitored by Ofsted.

• Community school: community schools are run by the local government, which employs the school staff, owns the land and buildings, and sets the entrance criteria that decide which children are eligible for a place.
• Foundation and Trust schools: foundation schools are run by a governing body which employs the staff and sets the entrance criteria. Land and buildings are owned either by the governing body or by a charitable foundation.

Programmes are evaluated to answer the questions and concerns of various parties. The public want to know whether the curriculum implemented has achieved its aims and objectives; teachers want to know whether what they are doing in the classroom is effective; and the developer or planner wants to know how to improve the curriculum product.

• McNeil (1977) states that "curriculum evaluation is an attempt to throw light on two questions: Do planned learning opportunities, programmes, courses and activities as developed and organised actually produce desired results? How can the curriculum offerings best be improved?" (p. 134).
• Ornstein and Hunkins (1998) define curriculum evaluation as "a process or cluster of processes that people perform in order to gather data that will enable them to decide whether to accept, change, or eliminate something - the curriculum in general or an educational textbook in particular" (p. 320).
• Worthen and Sanders (1987) define curriculum evaluation as "the formal determination of the quality, effectiveness, or value of a programme, product, project, process, objective, or curriculum" (pp. 22-23).
• Gay (1985) argues that the aim of curriculum evaluation is to identify its weaknesses and strengths as well as problems encountered in implementation; to improve the curriculum development process; and to determine the effectiveness of the curriculum and the returns on the finance allocated.
• Oliva (1988) defined curriculum evaluation as the process of delineating, obtaining, and providing useful information for judging decision alternatives. The primary decision alternatives to consider on the basis of the evaluation results are: to maintain the curriculum as it is; to modify the curriculum; or to eliminate the curriculum.

Evaluation is a disciplined inquiry to determine the worth of things. 'Things' may include programmes, procedures or objects. Generally, research and evaluation are different even though similar data collection tools may be used. The three dimensions on which they may differ are:
• First, evaluation need not have as its objective the generation of knowledge; evaluation is applied, while research tends to be basic.
• Second, evaluation presumably produces information that is used to make decisions or forms the basis of policy. Evaluation yields information that has immediate use, while research need not.
• Third, evaluation is a judgement of worth. Evaluation results in value judgements, while research need not and, some would say, should not.
As mentioned earlier, evaluation is the process of determining the significance or worth of programmes or procedures. Scriven (1967) differentiated between formative evaluation and summative evaluation. The two terms have since come to mean different things to different people, but in this chapter Scriven's original definitions will be used.

8.2.1 Formative evaluation

The term formative indicates that data is gathered during the formation or development of the curriculum so that revisions to it can be made. Formative evaluation may include determining who needs the programme (e.g. secondary school students), how great the need is (e.g. students need to be taught ICT skills to keep pace with the expansion of technology) and how to meet the need (e.g. introduce a compulsory subject on ICT for all secondary school students). In education, the aim of formative evaluation is usually to obtain information to improve a programme.

In formative evaluation, experts would evaluate the match between the instructional strategies and materials used and the learning outcomes, or what the curriculum aims to achieve. For example, it is possible that in a curriculum plan the learning outcomes and the learning activities do not match: you want students to develop critical thinking skills, but there are no learning activities that provide opportunities for students to practise critical thinking. Formative evaluation by experts is useful before full-scale implementation of the programme, as expert review of the curriculum plan may provide useful information for modifying or revising selected strategies. Learners may also be included in formative evaluation to review the materials and determine whether they can use them: for example, do they have the relevant prerequisites, and are they motivated to learn? From these formative reviews, problems may be discovered; for example, a curriculum document may contain spelling errors, a confusing sequence of content, or inappropriate examples or illustrations. The feedback obtained can be used to revise and improve instruction, or to decide whether or not to adopt the programme before full implementation.

8.2.2 Summative evaluation

The term summative indicates that data is collected at the end of the implementation of the curriculum programme. Summative evaluation can occur just after new course materials have been implemented in full (i.e. to evaluate the effectiveness of the programme), or several months to years after the materials have been implemented in full. It is important to specify what questions you want answered by the evaluation and what decisions will be made as a result of the evaluation. You may want to know whether learners achieved the objectives or whether the programme produced the desired outcomes; for example, whether the use of a specific simulation software package in the teaching of geography enhanced the decision-making skills of learners. These outcomes can be determined through formal assessment tasks such as marks obtained in tests and examinations. Also of concern is whether the innovation was cost-effective. Was the innovation efficient in terms of time to completion? Were there any unexpected outcomes? Besides quantitative data to determine how well students met specified objectives, data could also include qualitative interviews, direct observations and document analyses.
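As a concrete illustration of the summative step just described, the minimal Python sketch below shows one way marks from a test or examination could be summarised to answer "did learners meet the objective?" and to feed the maintain/modify/eliminate decision. The marks, threshold and decision rule are entirely hypothetical and are not part of any model discussed in this chapter.

```python
# Minimal sketch (hypothetical data): summarising summative test marks to ask
# "did learners meet the objective?" A mark at or above the mastery threshold
# is counted as meeting the objective.

def objective_attainment(marks, mastery_threshold=50):
    """Return the proportion of learners whose mark meets the threshold."""
    if not marks:
        raise ValueError("No marks supplied")
    met = sum(1 for mark in marks if mark >= mastery_threshold)
    return met / len(marks)

if __name__ == "__main__":
    # Hypothetical examination marks for one cohort (percentages).
    geography_marks = [72, 48, 65, 81, 55, 39, 90, 60]
    attainment = objective_attainment(geography_marks, mastery_threshold=50)
    print(f"Proportion meeting the objective: {attainment:.0%}")
    # Purely illustrative decision rule: the real judgement would also weigh
    # cost-effectiveness, time to completion and unexpected outcomes.
    print("Suggested action:", "maintain" if attainment >= 0.8 else "review/modify")
```

In practice such a summary would sit alongside the qualitative evidence mentioned above (interviews, observations, document analyses) rather than replace it.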
How should you go about evaluating a curriculum? Several experts have proposed models describing how and what should be involved in evaluating a curriculum. Models are useful because they help you define the parameters of an evaluation, the concepts to study and the procedures to be used to extract important data. Numerous evaluation models have been proposed, but three are discussed here.

8.3.1 Context, Input, Process, Product Model (CIPP Model)

Daniel L. Stufflebeam (1971), who chaired the Phi Delta Kappa National Study Committee on Evaluation, introduced a widely cited model of evaluation known as the CIPP (context, input, process and product) model. When applied to education, the approach aims to determine whether a particular educational effort has resulted in a positive change in a school, college, university or training organisation. A major aspect of Stufflebeam's model is centred on decision making, or the act of making up one's mind about the programme introduced. For evaluations to be done correctly and to aid the decision-making process, curriculum evaluators have to:
• first, delineate what is to be evaluated and determine what information has to be collected (e.g. how effective the new science programme has been in enhancing the scientific thinking skills of children in the primary grades);
• second, obtain or collect the information using selected techniques and methods (e.g. interview teachers, collect test scores of students);
• third, provide or make available the information (in the form of tables and graphs) to interested parties.

To decide whether to maintain, modify or eliminate the new curriculum or programme, information is obtained by conducting four types of evaluation: context, input, process and product. Stufflebeam's model relies on both formative and summative evaluation to determine the overall effectiveness of a curriculum programme (see Figure 8.1). Evaluation is required at all levels of the programme implemented.

Figure 8.1 Formative and summative evaluation in the CIPP Model

a) Context evaluation (What needs to be done, and in what context?) This is the most basic kind of evaluation, with the purpose of providing a rationale for the objectives. The evaluator defines the environment in which the curriculum is implemented, which could be a classroom, a school or a training department. The evaluator determines which needs were not met and the reasons why they are not being met. Also identified are the shortcomings and problems in the organisation under review (e.g. a sizable proportion of students in secondary schools are unable to read at the desired level, the ratio of students to computers is large, a sizable proportion of science teachers are not proficient enough to teach in English). Goals and objectives are specified on the basis of context evaluation. In other words, the evaluator determines the background in which the innovations are being implemented. The techniques of data collection would include observation of conditions in the school, background statistics of teachers, and interviews with the players involved in implementing the curriculum.

b) Input evaluation (How should it be done?) The purpose of this evaluation is to provide information for determining how to utilise resources to achieve the objectives of the curriculum. The resources of the school and the various designs for carrying out the curriculum are considered. At this stage the evaluator decides on the procedures to be used. Unfortunately, methods for input evaluation are lacking in education.
The prevalent practices include committee deliberations, appeal to the professional literature, the employment of consultants, and pilot experimental projects.

c) Process evaluation (Is it being done?) This is the provision of periodic feedback while the curriculum is being implemented.

d) Product evaluation (Did it succeed?) This concerns the outcomes of the initiative. Data is collected to determine whether the curriculum managed to accomplish what it set out to achieve (e.g. to what extent students have developed more positive attitudes towards science). Product evaluation involves measuring the achievement of objectives, interpreting the data and providing decision makers with information that will enable them to decide whether to continue, terminate or modify the new curriculum. For example, product evaluation might reveal that students have become more interested in science and are more positive towards the subject after the introduction of the new science curriculum. Based on these findings, the decision may be made to implement the programme throughout the country.

8.4.2 Case Study: Evaluation of a Programme on Technology Integration in Teaching and Learning in Secondary Schools

The integration of information and communication technology (ICT) in teaching and learning is growing rapidly in many countries. The use of the internet and other computer software in teaching science, mathematics and the social sciences is more widespread today. To evaluate the effectiveness of such a programme using the CIPP model would involve examining the questions below (collected into a simple checklist sketch after the case study):

Context: Examine the environment in which technology is used in teaching and learning.
• How did the real environment compare to the ideal? (e.g. the programme required five computers in each classroom, but there were only two computer labs of 40 units each for 1,000 students)
• What problems are hampering the success of technology integration? (e.g. technology breakdowns, not all schools had internet access)
• About 50% of teachers do not have basic computer skills.

Input: Examine what resources are put into technology integration (identify the educational strategies most likely to achieve the desired result).
• Is the content selected for using technology right?
• Have we used the right combination of media (internet, video clips, etc.)?

Process: Assess how well the implementation works (uncover implementation issues).
• Did technology integration run smoothly?
• Were there technology problems?
• Were teachers able to integrate technology in their lessons as planned?
• In which areas of the curriculum did most students experience difficulty?

Product: Address the outcomes of the learning (gather information on the results of the educational intervention to interpret its worth and merit).
• Did the learners learn using technology? How do you know?
• Does technology integration enhance higher-order thinking?
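As flagged above, the following minimal Python sketch shows how the case-study questions could be organised as a CIPP checklist, so that evidence gathered during the evaluation stays attached to the stage and question it answers. The structure and wording are hypothetical illustrations, not part of Stufflebeam's model itself.

```python
# Minimal sketch (hypothetical structure): organising CIPP questions as a
# checklist so that evidence gathered during the evaluation stays attached
# to the stage (context, input, process, product) it belongs to.
from dataclasses import dataclass, field

@dataclass
class EvaluationItem:
    question: str
    evidence: list[str] = field(default_factory=list)  # notes, scores, interview extracts

@dataclass
class CIPPEvaluation:
    context: list[EvaluationItem]
    input: list[EvaluationItem]
    process: list[EvaluationItem]
    product: list[EvaluationItem]

    def report(self) -> None:
        for stage in ("context", "input", "process", "product"):
            print(stage.upper())
            for item in getattr(self, stage):
                status = "evidence collected" if item.evidence else "no evidence yet"
                print(f"  - {item.question} [{status}]")

# Hypothetical items drawn from the technology-integration case study above.
ict_evaluation = CIPPEvaluation(
    context=[EvaluationItem("How did the real environment compare to the ideal?",
                            ["Two 40-unit labs for 1,000 students"])],
    input=[EvaluationItem("Is the content selected for using technology right?")],
    process=[EvaluationItem("Did technology integration run smoothly?")],
    product=[EvaluationItem("Does technology integration enhance higher-order thinking?")],
)
ict_evaluation.report()
```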
8.4.3 Stake's Countenance Model

The model proposed by Robert Stake (1967) suggests three phases of curriculum evaluation: the antecedent phase, the transaction phase and the outcome phase. The antecedent phase includes conditions existing prior to instruction that may relate to outcomes; the transaction phase constitutes the process of instruction; and the outcome phase relates to the effects of the programme. Stake emphasises two operations: descriptions and judgements. Descriptions are divided according to whether they refer to what was intended or what was actually observed, and judgements are separated according to whether they refer to the standards used in arriving at the judgements or to the actual judgements themselves.

Figure 8.3 Stake's Countenance Model (antecedents, transactions, outcomes)

8.3.2 Eisner's Connoisseurship Model

Elliot Eisner, a well-known art educator, argued that learning is too complex to be broken down into a list of objectives and measured quantitatively to determine whether it has taken place. He argued that teaching small, manageable pieces of information prevents students from putting the pieces back together and applying them to new situations. As long as we evaluate students on small bits of information, small bits of information are all that students will learn. Eisner contends that evaluation has always driven, and will always drive, the curriculum: if we want students to be able to solve problems and think critically, then we must evaluate problem solving and critical thinking, skills which cannot be learned by rote practice. So, to evaluate a programme we must attempt to capture the richness and complexity of classroom events.

He proposed the Connoisseurship Model, in which he claimed that a knowledgeable evaluator can determine whether a curriculum programme has been successful, using a combination of skills and experience. The word 'connoisseurship' comes from the Latin word cognoscere, meaning to know. For example, to be a connoisseur of food, paintings or films, you must have knowledge about and experience with different types of food, paintings or films before you are able to criticise them. To be a food critic, you must be a connoisseur of different kinds of food. To be a critic, you must be aware of and appreciate the subtle differences in the phenomenon you are examining. In other words, the curriculum evaluator must seek to be an educational critic.
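In contrast to Eisner's deliberately qualitative stance, Stake's countenance framework lends itself to a tabular layout. The minimal sketch below records intended and observed descriptions for the three phases and flags where they diverge, which is where judgement against standards is then applied. The entries are hypothetical and are only loosely based on the examples used earlier in this chapter; they are not taken from Stake.

```python
# Minimal sketch (hypothetical entries): Stake's description matrix, with
# intended versus observed descriptions for each phase. Rows where the two
# differ are the places where judgement against standards is then applied.
PHASES = ("antecedents", "transactions", "outcomes")

description_matrix = {
    "antecedents": {"intended": "All students have basic ICT skills",
                    "observed": "About half lack basic ICT skills"},
    "transactions": {"intended": "Weekly simulation-based geography lessons",
                     "observed": "Simulations used roughly fortnightly"},
    "outcomes": {"intended": "Improved decision-making skills",
                 "observed": "Modest improvement on assessment tasks"},
}

for phase in PHASES:
    intended = description_matrix[phase]["intended"]
    observed = description_matrix[phase]["observed"]
    congruent = intended == observed  # crude check; real congruence is a judgement call
    flag = "congruent" if congruent else "discrepancy -> judge against standards"
    print(f"{phase:12s} | intended: {intended} | observed: {observed} | {flag}")
```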