Web page design criteria for pre-school, andragogy and cooperative learning


Web page criteria for Andragogy

Adults have different learning styles than children. Web design for adults should restrict the use of graphic or pictorial materials, avoid long pages of printed text, avoid textured backgrounds, use a larger font size, and avoid background-and-print combinations of red and green or blue and yellow. The design should provide a link to a complete text version of each audio clip, provide text or signing for video clips, and provide both visual and auditory warnings. Web design for adults should also furnish keyboard commands, increase the size of graphic links, double-space vertical lists of links, avoid forms with small boxes, and avoid functions that require a response within a specified amount of time. It is the characteristics of adult learners, and their physical differences, that should be considered when designing a web page.


Web page criteria for Pre-School
Web design for pre-school children should use very colorful backgrounds, plenty of animation and attractive pictures, appealing icons, colorful fonts, interactive activities for exercises and learning, and plenty of sound, so that children enjoy using the learning web page.


Web page criteria for cooperative learning are almost the same as those for andragogy, because cooperative learning is usually used as a teaching method in higher education.

Implementation of Learning Theories into Web Pages

What are learning theories?


A learning theory is an attempt to describe how people and animals learn, thereby helping us understand the inherently complex process of learning. Learning theories have two chief values according to Hill (2002). One is in providing us with vocabulary and a conceptual framework for interpreting the examples of learning that we observe. The other is in suggesting where to look for solutions to practical problems. The theories do not give us solutions, but they do direct our attention to those variables that are crucial in finding solutions.


There are three main categories or philosophical frameworks under which learning theories fall: behaviorism, cognitivism, and constructivism.


Behaviorism focuses only on the objectively observable aspects of human learning. In essence, three basic assumptions are held to be true. First, learning is manifested by a change in behavior. Second, the environment shapes behavior. And third, the principles of contiguity (how close in time two events must be for a bond to be formed) and reinforcement (any means of increasing the likelihood that an event will be repeated) are central to explaining the learning process. For behaviorism, learning is the acquisition of new behavior through conditioning.


Cognitive theories look beyond behavior to explain brain-based learning. Cognitivists consider how human memory works to promote learning. The difference between cognitive learning theory and behavioral theory is that cognitivists place much more emphasis on factors within the learner and less emphasis on factors within the environment.


Constructivism views learning as a process in which the learner actively constructs or builds new ideas or concepts. Constructivists believe that knowledge is individually and socially constructed by learners based on their interpretations of experiences in the world. Constructivism itself has many variations, such as active learning, discovery learning, and knowledge building. Aspects of constructivism can be found in self-directed learning, transformational learning, experiential learning, situated cognition, and reflective practice.


INSTRUCTIONAL DESIGN


Instructional Design is the practice of maximizing the effectiveness, efficiency and appeal of instruction and other learning experiences. The process consists broadly of determining the current state and needs of the learner, defining the end goal of instruction, and creating some "intervention" to assist in the transition. Ideally, the process is informed by pedagogy, making the instruction as well as the instructional materials more engaging, effective and efficient.

ID models serve as conceptual, management, and communication tools for analyzing, designing, creating and evaluating guided learning, ranging from broad educational environments to narrow training applications. As a discipline of study, instructional design has been defined in several ways:


- Instructional Design simply means using a systematic process to understand a human performance problem, figuring out what to do about it, and then doing something about it (McArdle, 1991).
- Instructional Design is the science of creating detailed specifications for the development, evaluation and maintenance of situations which facilitate learning (Richey, 1986).
- Instructional Design is the entire process of analysis of learning needs and goals and the development of a delivery system to meet those needs (Briggs, 1977).


ADDIE model


The ADDIE model is the generic process traditionally used by instructional designers and training developers. The ADDIE instructional design model provides a step-by-step process that helps training specialists plan and create training programs. This acronym stands for the 5 phases contained in the model:


Analyze - analyze learner characteristics, task to be learned, etc.
Design - develop learning objectives, choose an instructional approach
Develop - create instructional or training materials
Implement - deliver or distribute the instructional materials
Evaluate - make sure the materials achieved the desired goals


Most of the current instructional design models are variations of the ADDIE model.

ASSURE Model
The ASSURE model is helpful for designing courses using different kinds of media. This model assumes that instruction will not be delivered through lecture and textbook alone. It allows for the possibility of incorporating out-of-class resources and technology into the course materials. This model will be especially helpful for instructors designing online courses. It emphasizes teaching students with different learning styles, and constructivist learning, where students are required to interact with their environment rather than passively receive information.

A - Analyze Learners - obtain the general entry behaviors of the learner, such as grades, previous knowledge and learning style

S - State Objectives - objectives need to be clear and measurable

S - Select Method, Media and Materials - select available materials, modify existing materials and suggest new materials

U - Utilize Media and Materials - preview the materials, prepare the materials and environment, and provide the learning experience

R - Require Learner Participation - in-class and follow-up activities, so learners can process the information

E - Evaluate and revise - before, during and after instruction, assess learning materials and learning outcomes

Web Evaluation

There are a variety of criteria to consider, ranging from personal, informal methods to more educational, formal techniques. There is no one perfect method of evaluating information; rather, you must make an inference from a collection of clues or indicators, based on the use you plan to make of your source. We need to evaluate the web because it has become one of the main resources for learning.

We can use traditional evaluation criteria for web evaluation purposes:

External/Internal Criteria

External criteria refer to the who and where of information. In other words, who wrote the article, and where did it come from? When we cannot evaluate the information itself, we can evaluate where it is coming from, and hope that those sources are credible. Internal criteria involve using our own expertise, or independent knowledge, to determine if the information is accurate.

Credibility Indicator

Authorship is a major factor in considering the accuracy and credibility of information found on the internet. Evaluating the credentials of an author involves analyzing the educational background, past writings, expertise, and responsibility he/she has for the information. One should check the knowledge base, skills, or standards employed by the author in gathering and communicating the data. We also must know the publisher of the document and how reputable the publisher is. Determining when the source was published is a necessary step in discerning a site's accuracy. We can check whether the source is current or out of date for the specific topic. Topics which continually change or develop rapidly (the sciences) require more current information.

Certain types of formats are more accessible on the Web, and are easier to use. When selecting first-rate sites, a variety of qualities should be present. The information should be easy to find and use. The design should be appealing to its audience. The text should be easy to read, not muddled with distracting graphics, fonts, and backgrounds. The site should be well organized and easy to get around. The page should load in a reasonable amount of time and be consistently available. In addition, recognizing spelling errors, grammatical errors, and profanity will assist in evaluating Web site design.

Purpose

Credibility issues are not only related to the material itself, but also to the reader's purpose. Therefore, another method of evaluating information is to consider the viewer's purpose for using the site. For instance, a viewer's purpose might be for their personal interest or for professional or educational reasons. Obviously, all information would need to be accurate and verified in several other types of sources.

Classic Methods of evaluation

- Use of experts. The expert will check the content of the software, look at how quickly information can be found, make an assessment, and draw comparisons between different versions.
- People test. Often used in formative evaluation to test software before selling it or releasing it to users. We watch and listen to a person using the software directly and analyze their behavior and comments. Finally, we find out what the user has learned.
- Use the results of others. A good starting point for evaluation is to look at the results of others: make a meta-study, and collect people's first impressions, self-reflections, and reports of use.
- Look at use in real life. We look at the economic value, where, when and why the product is used, and we ask whether the user is happy with the software.

The evaluation tools that have been designed and used for centuries to evaluate traditional printed resources are not sufficient in assessing the credibility of material found on the Web due to the nature of this vast new medium. However, there are a variety of tools that have been designed to assist in the evaluation of Internet information.

Web evaluation process

The web evaluation process is divided into summative and formative evaluation:

Summative Evaluation provides information on the product's efficacy (its ability to do what it was designed to do). For example, did the learners learn what they were supposed to learn after using the instructional module? In a sense, it lets the learners know how they did, but more importantly, by looking at how the learners did, it helps to establish whether the product teaches what it is supposed to teach. Summative evaluation is typically quantitative, using numeric scores or letter grades to assess learner achievement.

Formative Evaluation is a bit more complex than summative evaluation. It is done with a small group of people to "test run" various aspects of the instructional materials. For example, we might ask a friend to look over the web pages to see if they are graphically pleasing, if there are errors we've missed, or if there are navigational problems. It's like having someone look over our shoulder during the development phase to catch the things we miss but a fresh set of eyes would not. At times, we might need this help from the target audience.

What is the difference between a Summative Evaluation and a Formative Evaluation?

Formative evaluation is typically conducted during the development or improvement of a program or product (or person, and so on), and it is conducted, often more than once, for the in-house staff of the program with the intent to improve it. Summative evaluation, on the other hand, is a method of judging the worth of a program or product at the end of the program activities; it focuses on the outcome.

Usability Evaluation Method

Usability is a quality attribute that assesses how easy user interfaces are to use. Usability testing is important because usability is a necessary condition for survival. If a website is difficult to use, people leave. If the homepage fails to clearly state what a company offers and what users can do on the site, people leave. If a website's information is hard to read or doesn't answer users' key questions, they leave.

Usability is defined by five quality components:
1. Learnability - How easy is it for users to accomplish basic tasks the first time they encounter the design?
2. Efficiency - Once users have learned the design, how quickly can they perform tasks?
3. Memorability - When users return to the design after a period of not using it, how easily can they reestablish proficiency?
4. Errors - How many errors do users make, how severe are these errors, and how easily can they recover from the errors?
5. Satisfaction - How pleasant is it to use the design?
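Several of these components can be approximated with simple quantitative proxies. The sketch below is a minimal illustration in Python; the `Session` log format, its field names, and the ratio used as a memorability proxy are all hypothetical assumptions for this example, not part of any standard usability instrument:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Session:
    """One participant's log for the same task (hypothetical format)."""
    first_time_s: float    # time on task at first encounter -> learnability
    trained_time_s: float  # time on task after practice     -> efficiency
    return_time_s: float   # time on task after a break      -> memorability
    errors: int            # error count                     -> errors
    satisfaction: int      # 1-5 rating                      -> satisfaction

def usability_summary(sessions: list[Session]) -> dict[str, float]:
    """Aggregate per-participant logs into component-level proxy metrics."""
    return {
        "learnability_s": mean(s.first_time_s for s in sessions),
        "efficiency_s": mean(s.trained_time_s for s in sessions),
        # a ratio close to 1.0 means proficiency is easily re-established
        "memorability": mean(s.trained_time_s / s.return_time_s
                             for s in sessions),
        "errors_per_session": mean(s.errors for s in sessions),
        "satisfaction": mean(s.satisfaction for s in sessions),
    }
```

For example, two logged sessions of `Session(120, 60, 75, 2, 4)` and `Session(100, 50, 50, 0, 5)` yield a mean first-encounter time of 110 seconds and a memorability ratio of 0.9.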

There are generally three types of usability evaluation methods: Testing, Inspection, and Inquiry.

1) TESTING
In the usability testing approach, representative users work on typical tasks using the system (or the prototype) and the evaluators use the results to see how well the user interface supports the users in doing their tasks. Testing methods include the following:

1) Coaching Method
This technique can be used during a usability test, where the participants are allowed to ask any system-related questions of an expert coach, who will answer to the best of his or her ability. Usually the tester serves as the coach. One variant of the method involves a separate expert user serving as the coach, while the tester observes both the interaction between the participant and the computer, and the interaction between the participant and the coach. The purpose of this technique is to discover the information needs of users in order to provide better training and documentation, as well as possibly redesign the interface to avoid the need for the questions. When an expert user serves as the coach, the expert user's mental model of the system can also be analyzed by the tester.

2) Co-Discovery Learning
During a usability test, two test users attempt to perform tasks together while being observed. They are to help each other in the same manner as they would if they were working together to accomplish a common goal using the product. They are encouraged to explain what they are thinking about while working on the tasks. Compared to the thinking-aloud protocol, this technique makes it more natural for the test users to verbalize their thoughts during the test. This technique can be used in the following development stages: design, code, test, and deployment.

3) Performance Measurement
This technique is used to obtain quantitative data about test participants' performance as they carry out tasks during a usability test. It generally prohibits interaction between the participant and the tester during the test, since that would affect the quantitative performance data. It should be conducted in a formal usability laboratory so that the data can be collected accurately and possible unexpected interference is minimized.
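As a small illustration of the kind of quantitative data involved, the Python sketch below computes two common performance measures, task completion rate and mean time on task for successful attempts, from per-task records; the record layout here is a hypothetical example, not a prescribed format:

```python
from statistics import mean

# Hypothetical test log: (participant, task, completed, seconds)
records = [
    ("p1", "checkout", True, 95.0),
    ("p2", "checkout", True, 130.0),
    ("p3", "checkout", False, 240.0),  # gave up / timed out
]

def completion_rate(records) -> float:
    """Fraction of attempts that ended in task success."""
    return sum(1 for _, _, done, _ in records if done) / len(records)

def mean_time_on_task(records) -> float:
    """Mean duration of the successful attempts only."""
    return mean(secs for _, _, done, secs in records if done)
```

With the sample records above, the completion rate is 2/3 and the mean time on task is 112.5 seconds.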

4) Question-asking Protocol
During a usability test, besides letting the test users verbalize their thoughts as in the thinking-aloud protocol, the testers prompt them by asking direct questions about the product, in order to understand their mental model of the system and the tasks, and where they have trouble in understanding and using the system.

5) Remote Testing
Remote usability testing is used when the tester(s) are separated in space and/or time from the participants. This means that the tester(s) cannot observe the testing process directly and that the participants are usually not in a formal usability laboratory. The tester can observe the test user's screen through a computer network, and may be able to hear what the test user says during the test through a speakerphone.

6) Shadowing Method
During a usability test, the tester has an expert user (in the task domain) sit next to him/her and explain the test user's behavior to the tester. This technique is used when it's not appropriate for the test user to think aloud or talk to the tester while working on the tasks.

7) Teaching Method
During a usability test, let the test users interact with the system first, so that they get familiar with it and acquire some expertise in accomplishing tasks with it. Then introduce a novice user to each test user. The novice users are briefed by the tester to limit their active participation and not to become active problem-solvers.

2) INSPECTION

1) Cognitive walkthrough
A cognitive walkthrough involves one evaluator or a group of evaluators inspecting a user interface by going through a set of tasks and evaluating its understandability and ease of learning. The user interface is often presented in the form of a paper mock-up or a working prototype, but it can also be a fully developed interface. The input to the walkthrough also includes the user profile, especially the users' knowledge of the task domain and of the interface, and the task cases. The evaluators may include human factors engineers, software developers, or people from marketing, documentation, etc. This technique is best used in the design stage of development, but it can also be applied during the code, test, and deployment stages.

2) Feature Inspection
This inspection technique focuses on the feature set of a product. The inspectors are usually given use cases with the end result to be obtained from the use of the product. Each feature is analyzed for its availability, understandability, and other aspects of usability. For example, a common user scenario for the use of a word processor is to produce a letter. The features that would be used include entering text, formatting text, spell-checking, saving the text to a file, and printing the letter. Each set of features used to produce the required output (a letter) is analyzed for its availability, understandability, and general usefulness.

3) Heuristic Evaluation
A heuristic is a guideline or general principle or rule of thumb that can guide a design decision or be used to critique a decision that has already been made. Heuristic evaluation is a method for structuring the critique of a system using a set of relatively simple and general heuristics.

4) Pluralistic Walkthrough
At the design stage, when a paper prototype is available, a group of users, developers, and human factors engineers meet to step through a set of tasks, discussing and evaluating the usability of a system. Group walkthroughs have the advantage of bringing a diverse range of skills and perspectives to bear on usability problems. As with any inspection, the more people looking for problems, the higher the probability of finding them.


3) INQUIRY

1) Field Observation
Human factors engineers go to representative users' workplaces and observe them at work, to understand how the users are using the system to accomplish their tasks and what kind of mental model the users have of the system. This method can be used in the test and deployment stages of the development of the product.

2) Focus Group
This is a data-collecting technique in which about 6 to 9 users are brought together to discuss issues relating to the system. A human factors engineer plays the role of moderator, and needs to prepare the list of issues to be discussed beforehand and seek to gather the needed information from the discussion. This can capture spontaneous user reactions and ideas that evolve in the dynamic group process.

3) Interviews
In this technique, human factors engineers formulate questions about the product based on the kinds of issues of interest. Then they interview representative users, asking these questions in order to gather the desired information. It is good at obtaining detailed information, as well as information that can only be obtained through the interactive process between interviewer and interviewee.