Web Evaluation

There are a variety of methods and criteria to consider, ranging from personal, informal methods to more formal, educational techniques. There is no one perfect method of evaluating information; rather, you must make an inference from a collection of clues or indicators, based on the use you plan to make of your source. We need to evaluate the web because it has become one of the main resources for learning.

We can use traditional evaluation criteria for web evaluation purposes:

External/Internal Criteria

External criteria refer to the who and where of information. In other words, who wrote the article, and where did it come from? When we cannot evaluate the information itself, we can evaluate where it is coming from and hope that those sources are credible. Internal criteria means using our own expertise, or independent knowledge, to determine whether the information is accurate.

Credibility Indicator

Authorship is a major factor in considering the accuracy and credibility of information found on the Internet. Evaluating an author's credentials involves analyzing his or her educational background, past writings, expertise, and responsibility for the information. One should check the knowledge base, skills, or standards the author employed in gathering and communicating the data. We must also know who published the document and how reputable that publisher is. Determining when the source was published is a necessary step in discerning a site's accuracy: we can check whether the source is current or out of date for the specific topic. Topics that change or develop rapidly (such as the sciences) require more current information.
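
For example, one small, practical check of a page's currency is to look at its HTTP Last-Modified header. The sketch below (the URL is just a placeholder) shows one way to do this; many servers do not send this header, so a missing value tells us nothing either way.

    # A minimal sketch of checking when a page was apparently last updated,
    # using the HTTP Last-Modified header. The URL is a placeholder, and the
    # header is optional, so a missing value means "unknown", not "outdated".
    import urllib.request

    def last_modified(url):
        request = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(request) as response:
            return response.headers.get("Last-Modified")

    print(last_modified("https://example.com/article.html"))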

Certain formats are more accessible on the Web and easier to use. When selecting first-rate sites, a variety of qualities should be present. The information should be easy to find and use. The design should be appealing to its audience. The text should be easy to read, not muddled with distracting graphics, fonts, and backgrounds. The site should be well organized and easy to get around. The page should load in a reasonable amount of time and be consistently available. In addition, noticing spelling errors, grammatical errors, and profanity will assist in evaluating a Web site's design.

Purpose

Credibility issues are related not only to the material itself, but also to the reader's purpose. Therefore, another method of evaluating information is to consider the viewer's purpose for using the site. For instance, a viewer's purpose might be personal interest, or it might be professional or educational. In any case, the information needs to be accurate and verifiable in several other types of sources.

Classic Methods of Evaluation

- Use of experts: The expert checks the content of a piece of software, looks at how quickly information can be found with it, makes an assessment, and draws comparisons between different versions.
- People test: Often used in formative evaluation to test software before selling it or releasing it to users. We watch and listen to a person using the software directly and analyze their behavior and comments. Finally, we find out what the user has learned.
- Use the results of others: A good starting point for evaluation is to look at the results of others: conduct a meta-study and collect people's first impressions, self-reflections, and reports of use.
- Look at use in real life: We look at the economic value and at where, when, and why the product is used, and we ask whether the user is happy with the software.

The evaluation tools that have been designed and used for centuries to evaluate traditional printed resources are not sufficient for assessing the credibility of material found on the Web, due to the nature of this vast new medium. However, a variety of tools have been designed to assist in the evaluation of Internet information.

Web evaluation process

The web evaluation process is divided into summative and formative evaluation:

Summative evaluation provides information on the product's efficacy (its ability to do what it was designed to do). For example, did the learners learn what they were supposed to learn after using the instructional module? In a sense, it lets the learners know how they did, but more importantly, by looking at how the learners did, it helps us know whether the product teaches what it is supposed to teach. Summative evaluation is typically quantitative, using numeric scores or letter grades to assess learner achievement.
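
As a small, hypothetical illustration of that quantitative side, the sketch below computes a mean post-test score and a pass rate; the scores and the 70% mastery threshold are invented values used only to show the idea of outcome-focused measurement.

    # A hypothetical summative scoring sketch: the scores and the mastery
    # threshold are made-up values, not data from any real module.
    post_test_scores = [82, 55, 91, 68, 74]   # one score per learner
    mastery_threshold = 70

    mean_score = sum(post_test_scores) / len(post_test_scores)
    pass_rate = sum(s >= mastery_threshold for s in post_test_scores) / len(post_test_scores)

    print(f"Mean score: {mean_score:.1f}")
    print(f"Learners reaching mastery: {pass_rate:.0%}")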

Formative evaluation is a bit more complex than summative evaluation. It is done with a small group of people to "test run" various aspects of instructional materials. For example, we might ask a friend to look over the web pages to see if they are graphically pleasing, if there are errors we've missed, or if there are navigational problems. It's like having someone look over our shoulder during the development phase to help us catch things that we miss but that a fresh pair of eyes might not. At times, we might need this help to come from the target audience.

What is the difference between summative evaluation and formative evaluation?

Formative evaluation is typically conducted during the development or improvement of a program or product (or person, and so on), and it is conducted, often more than once, for the in-house staff of the program with the intent to improve it. Summative evaluation, on the other hand, judges the worth of a program or product at the end of the program activities and focuses on the outcome.

Usability Evaluation Methods

Usability is a quality attribute that assesses how easy user interfaces are to use. Usability testing is important because usability is a necessary condition for survival. If a website is difficult to use, people leave. If the homepage fails to clearly state what a company offers and what users can do on the site, people leave. If a website's information is hard to read or doesn't answer users' key questions, they leave.

Usability is defined by five quality components:
1. Learnability - How easy is it for users to accomplish basic tasks the first time they encounter the design?
2. Efficiency - Once users have learned the design, how quickly can they perform tasks?
3. Memorability - When users return to the design after a period of not using it, how easily can they reestablish proficiency?
4. Errors - How many errors do users make, how severe are these errors, and how easily can they recover from the errors?
5. Satisfaction - How pleasant is it to use the design?
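
As a rough, hypothetical sketch of how a few of these components can be quantified from test-session data, the example below averages task times, error counts, and satisfaction ratings; the session records and field names are invented sample values, not output from any real study.

    # A hypothetical aggregation of usability-test observations. The session
    # records below are invented sample data; in practice they would come
    # from test logs or observer notes.
    sessions = [
        {"task": "find pricing page", "seconds": 48, "errors": 1, "satisfaction": 4},
        {"task": "find pricing page", "seconds": 95, "errors": 3, "satisfaction": 2},
        {"task": "find pricing page", "seconds": 37, "errors": 0, "satisfaction": 5},
    ]

    n = len(sessions)
    avg_time = sum(s["seconds"] for s in sessions) / n               # efficiency
    avg_errors = sum(s["errors"] for s in sessions) / n              # errors
    avg_satisfaction = sum(s["satisfaction"] for s in sessions) / n  # satisfaction (1-5)

    print(f"Average task time: {avg_time:.0f} s")
    print(f"Average errors per session: {avg_errors:.1f}")
    print(f"Average satisfaction (1-5): {avg_satisfaction:.1f}")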

There are generally three types of usability evaluation methods: testing, inspection, and inquiry.

1) TESTING
In the usability testing approach, representative users work on typical tasks using the system (or a prototype), and the evaluators use the results to see how well the user interface supports the users in doing their tasks. Testing methods include the following:

1) Coaching Method
This technique can be used during a usability test, where the participants are allowed to ask any system-related questions of an expert coach, who answers to the best of his or her ability. Usually the tester serves as the coach. One variant of the method involves a separate expert user serving as the coach, while the tester observes both the interaction between the participant and the computer and the interaction between the participant and the coach. The purpose of this technique is to discover the information needs of users in order to provide better training and documentation, as well as possibly redesign the interface to avoid the need for the questions. When an expert user is used as the coach, the expert user's mental model of the system can also be analyzed by the tester.

2) Co-Discovery Learning
During a usability test, two test users attempt to perform tasks together while being observed. They help each other in the same manner as they would if they were working together to accomplish a common goal using the product. They are encouraged to explain what they are thinking about while working on the tasks. Compared to the thinking-aloud protocol, this technique makes it more natural for the test users to verbalize their thoughts during the test. This technique can be used in the following development stages: design, code, test, and deployment.

3) Performance Measurement
This technique is used to obtain quantitative data about test participants' performance when they perform tasks during a usability test. It generally prohibits interaction between the participant and the tester during the test, since such interaction would affect the quantitative performance data. It should be conducted in a formal usability laboratory so that the data can be collected accurately and possible unexpected interference is minimized.
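
A minimal sketch of collecting one such measure, task completion time, is shown below; the task name is a placeholder, and in a real laboratory setup the timing would be driven by logging software rather than manual prompts.

    # A hypothetical timer for recording task completion times during a test.
    # Task names and the manual start/stop prompts are assumptions for illustration.
    import time

    class TaskTimer:
        def __init__(self):
            self.records = []

        def run(self, task_name):
            input(f"{task_name}: press Enter when the participant starts")
            started = time.monotonic()
            input("Press Enter when the task is completed")
            elapsed = time.monotonic() - started
            self.records.append((task_name, elapsed))
            return elapsed

    timer = TaskTimer()
    timer.run("Find the pricing page")
    for task, seconds in timer.records:
        print(f"{task}: {seconds:.1f} s")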

4) Question-asking Protocol
During a usability test, besides letting the test users verbalize their thoughts as in the thinking-aloud protocol, the testers prompt them by asking direct questions about the product, in order to understand their mental model of the system and the tasks, and to learn where they have trouble understanding and using the system.

5) Remote Testing
Remote usability testing is used when the tester(s) are separated in space and/or time from the participants. This means that the tester(s) cannot observe the testing process directly and that the participants are usually not in a formal usability laboratory. The tester can observe the test user's screen through a computer network, and may be able to hear what the test user says during the test over a speakerphone.
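
One piece of a remote setup can be a simple logging endpoint that receives event reports from the participant's machine for the tester to review afterwards. The sketch below is a hypothetical, minimal version of such an endpoint (the port and log file name are assumptions); screen sharing and audio would be handled by separate tools.

    # A hypothetical, minimal collection endpoint for remote sessions: the
    # participant's side posts JSON events (page visited, timestamp, comment)
    # here, and the tester reviews the resulting log file later.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class EventLogHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            event = json.loads(self.rfile.read(length))
            with open("remote_session_events.jsonl", "a") as log:
                log.write(json.dumps(event) + "\n")
            self.send_response(204)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8000), EventLogHandler).serve_forever()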

6) Shadowing Method
During a usability test, the tester has an expert user (in the task domain) sit next to him/her and explain the test user's behavior to the tester. This technique is used when it's not appropriate for the test user to think aloud or talk to the tester while working on the tasks.

7) Teaching Method
During a usability test, the test users interact with the system first, so that they become familiar with it and acquire some expertise in accomplishing tasks with it. Then a novice user is introduced to each test user. The novice users are briefed by the tester to limit their active participation and not become active problem-solvers; the test users then teach the novices how to accomplish the tasks, and the tester observes what they explain and how.

2) INSPECTION

1) Cognitive walkthrough
A cognitive walkthrough involves one or a group of evaluators inspecting a user interface by going through a set of tasks and evaluating its understandability and ease of learning. The user interface is often presented in the form of a paper mock-up or a working prototype, but it can also be a fully developed interface. The input to the walkthrough also includes the user profile, especially the users' knowledge of the task domain and of the interface, and the task cases. The evaluators may include human factors engineers, software developers, or people from marketing, documentation, etc. This technique is best used in the design stage of development, but it can also be applied during the code, test, and deployment stages.

2) Feature Inspection
This inspection technique focuses on the feature set of a product. The inspectors are usually given use cases with the end result to be obtained from the use of the product. Each feature is analyzed for its availability, understandability, and other aspects of usability. For example, a common user scenario for the use of a word processor is to produce a letter. The features that would be used include entering text, formatting text, spell-checking, saving the text to a file, and printing the letter. Each set of features used to produce the required output (a letter) is analyzed for its availability, understandability, and general usefulness.

3) Heuristic Evaluation
A heuristic is a guideline or general principle or rule of thumb that can guide a design decision or be used to critique a decision that has already been made. Heuristic evaluation is a method for structuring the critique of a system using a set of relatively simple and general heuristics.
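
As a rough, hypothetical sketch of how such a structured critique can be recorded, the example below scores an interface against a few example heuristics with a simple severity scale; the heuristic names, the 0-4 severity values, and the findings are assumptions used only for illustration.

    # A hypothetical record of heuristic-evaluation findings. The heuristics
    # and findings are invented examples; severity uses an assumed scale from
    # 0 (not a problem) to 4 (usability catastrophe).
    heuristics = [
        "Visibility of system status",
        "Match between system and the real world",
        "Consistency and standards",
        "Error prevention",
    ]

    findings = [
        {"heuristic": "Visibility of system status", "severity": 3,
         "note": "No progress indicator while search results load"},
        {"heuristic": "Consistency and standards", "severity": 2,
         "note": "Two different labels are used for the shopping cart"},
    ]

    # Summarize the most severe problem found for each heuristic.
    for heuristic in heuristics:
        worst = max((f["severity"] for f in findings if f["heuristic"] == heuristic), default=0)
        print(f"{heuristic}: worst severity {worst}")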

4) Pluralistic Walkthrough
At the design stage, when a paper prototype is available, a group of users, developers, and human factors engineers meets to step through a set of tasks, discussing and evaluating the usability of the system. Group walkthroughs have the advantage of bringing a diverse range of skills and perspectives to bear on usability problems. As with any inspection, the more people looking for problems, the higher the probability of finding them.


3) INQUIRY

1) Field Observation
Human factors engineers go to representative users' workplaces and observe them as they work, to understand how the users are using the system to accomplish their tasks and what kind of mental model the users have of the system. This method can be used in the test and deployment stages of the development of the product.

2) Focus Group
This is a data collection technique in which about 6 to 9 users are brought together to discuss issues relating to the system. A human factors engineer plays the role of moderator, preparing the list of issues to be discussed beforehand and seeking to gather the needed information from the discussion. This can capture spontaneous user reactions and ideas that evolve in the dynamic group process.

3) Interviews
In this technique, human factors engineers formulate questions about the product based on the kinds of issues of interest. Then they interview representative users, asking them these questions in order to gather the desired information. This is good for obtaining detailed information, as well as information that can only be obtained from the interactive process between the interviewer and the interviewee.
