Methods of data collection
RIAZ.K.M
Added: June 06, 2011

STUDY TYPES

- Non-intervention studies, in which the researcher observes and analyses objects or situations but does not intervene.
- Intervention studies, in which the researcher manipulates objects or situations and measures the outcome of the manipulation (e.g., implementing intensive health education and measuring the improvement in immunization rates).

NON-INTERVENTION STUDIES

- Exploratory studies
- Descriptive studies
- Comparative (analytical) studies

1. Exploratory studies

An EXPLORATORY STUDY is a small-scale study of relatively short duration, carried out when little is known about a situation or a problem.

Example: A national Acquired Immunodeficiency Syndrome (AIDS) Control Programme wishes to establish counselling services for Human Immunodeficiency Virus (HIV) positive and AIDS patients, but lacks information on the specific needs patients have for support. To explore these needs, a number of in-depth interviews are held with various categories of patients (male, female, married, single) and with counsellors working on a programme that is already under way.
In exploratory studies we describe the needs of various categories of patients and the possibilities for action. We may want to go further and try to explain the differences we observe (e.g., in the needs of male and female AIDS patients) or to identify the causes of problems. Then we will need to compare groups.

2. Descriptive studies

A DESCRIPTIVE STUDY describes the characteristics of a particular situation, event or case. Descriptive studies can be carried out on a small or large scale:
(1) small-scale, descriptive case studies
(2) large-scale, cross-sectional surveys

3. Comparative or analytical studies

An ANALYTICAL STUDY attempts to establish the causes of, or risk factors for, certain problems. This is done by comparing two or more groups, some of which have or develop the problem and some of which do not.

DIMENSIONS OF DATA COLLECTION

Data collection methods vary along four dimensions: structure, quantifiability, researcher obtrusiveness, and objectivity.

Structure. Research data for quantitative studies are usually collected according to a structured plan; qualitative studies rely almost exclusively on unstructured or loosely structured methods of data collection.

Quantifiability. Data must be collected in a form that can be quantified for statistical analysis.

Researcher obtrusiveness. Participants' behaviour may be distorted by an obvious researcher presence, for example when: a programme is being evaluated and participants have a vested interest in the evaluation outcome; participants are engaged in socially unacceptable or atypical behaviour; participants have not complied with medical and nursing instructions; or participants are the type of people who have a strong need to "look good." When researcher obtrusiveness is unavoidable under these circumstances, researchers should make an effort to put participants at ease.

Objectivity. Objectivity refers to the degree to which two independent researchers can arrive at similar "scores" or make similar observations regarding the concepts of interest. Researchers in the positivist tradition usually strive for a reasonable amount of objectivity; in the naturalistic paradigm, the subjective judgment of the researcher is accepted as part of the inquiry.

TYPES OF DATA COLLECTION METHODS

- Self-reports
- Observation
- Biophysiologic measures

Self-reports. Data are collected by questioning people, for example about patients' perceptions of hospital care, their preoperative fears, or their health-promoting habits. The vast majority of nursing studies involve data collected by self-report. Advantages: directness; versatility; the ability to gather retrospective data and projections about future behaviour. Disadvantage: concerns about the validity and accuracy of self-reports.

Observation. Observations can be made in the laboratory or in natural settings, directly through the human senses or with the aid of technical apparatus. Observational methods can vary in degree of structure. Observational research is particularly well suited to nursing, but it raises ethical difficulties and is vulnerable to distorted behaviour and to observer biases: the emotions, prejudices, attitudes, and values of observers; personal interest and commitment; anticipation of what is to be observed; and hasty decisions made before adequate information is gathered.

Biophysiologic measures. Measures of physiologic and physical variables. Strengths: objectivity, relative precision, and sensitivity. Weakness: technical failures.

Using quantitative data qualitatively is called "qualitizing" data. Using qualitative data quantitatively ("quantitizing") supports generating meaning from qualitative data, documenting and confirming conclusions, and re-presenting data and lives.

In a quantitative study, identify the data requirements for:
- testing the hypotheses or addressing the research questions
- describing sample characteristics
- controlling extraneous variables
- analyzing potential biases
- understanding subgroup effects
- interpreting results
- checking the manipulation
- obtaining administrative information

Selecting and developing instruments

For most constructs, existing instruments are available and should be considered. Assess appropriateness: does the instrument capture your conceptual definition of the variable, and will it yield data of sufficiently high quality? Other considerations: resources; availability and familiarity; norms and comparability; population appropriateness; administration issues; reputation.

Pretesting

A pretest can be a stand-alone methodologic study used to determine:
- how much time it takes to administer the entire instrument package, and whether participants find it burdensome
- which parts of the instrument package are difficult for pretest subjects to read or understand, or may be misinterpreted by them
- any instruments or questions that participants find objectionable or offensive
- whether the sequencing of instruments is sensible
- needs for training data collection staff
- whether the measures yield data with sufficient variability

Data collection protocols

Protocols spell out the procedures to be used in data collection:
- conditions that must be met for collecting the data
- specific procedures for collecting the data, including requirements for sequencing instruments and recording information
- standard information to provide participants who ask routine questions about the study
- procedures to follow if a participant becomes distraught or disoriented, or for any other reason cannot complete the data collection

Data quality also depends on research personnel. Desirable characteristics: experience; congruity with sample characteristics; unremarkable appearance; personality; availability.

DATA COLLECTION IN QUALITATIVE STUDIES

ETHNOGRAPHY
Types of data: primarily observation and interviews, plus artifacts, documents, photographs, maps, and social network diagrams
Unit of data collection: cultural systems
Data collection points: mainly longitudinal
Length of time for data collection: typically long (many months or years)
Data recording: field notes, logs, interview notes/recordings

PHENOMENOLOGY
Types of data: primarily in-depth interviews, sometimes diaries and other written materials
Unit of data collection: individuals
Data collection points: mainly cross-sectional
Length of time for data collection: typically moderate
Data recording: interview notes/recordings

GROUNDED THEORY
Types of data: primarily individual interviews, sometimes group interviews, observation, participant journals, and documents
Unit of data collection: individuals
Data collection points: cross-sectional or longitudinal
Length of time for data collection: typically moderate
Data recording: interview notes/recordings, memoing, observational notes

Types of qualitative self-reports

- Unstructured interviews
- Semi-structured interviews (with a topic guide)
- Focus group interviews
- Joint interviews
- Life histories
- Oral histories
- Critical incidents
- Diaries and journals
- The think-aloud method
- Photo elicitation interviews
- Self-report narratives on the Internet

QUANTITATIVE SELF-REPORT INSTRUMENTS

An interview schedule or questionnaire (SAQ, self-administered questionnaire) consists of a set of questions known as items. Items may be open-ended questions or closed-ended (fixed-alternative) questions.

Closed-ended question types

Dichotomous questions:
Have you ever been hospitalized? 1. Yes 2. No

Multiple-choice questions:
How important is it to you to avoid a pregnancy at this time?
Extremely important / Very important / Somewhat important / Not important

Cafeteria questions:
People have different opinions about the use of estrogen replacement therapy for women at menopause. Which of the following statements best represents your point of view?
- Estrogen replacement is dangerous and should be banned.
- Estrogen replacement has undesirable side effects that suggest the need for caution in its use.
- I am undecided about my views on estrogen replacement.
- Estrogen replacement has many beneficial effects that merit its use.
- Estrogen replacement is a wonder treatment that should be administered routinely to most menopausal women.

Rank-order questions:
People value different things in life. Below is a list of things that many people value. Please indicate their order of importance to you by placing a "1" beside the most important, "2" beside the second-most important, and so on.
____ Career achievement/work
____ Family relationships
____ Friendships, social interactions
____ Health
____ Money
____ Religion

Forced-choice questions:
Which statement most closely represents your point of view?
- What happens to me is my own doing.
- Sometimes I feel I don't have enough control over my life.

Rating questions:
On a scale from 0 to 10, where 0 means "extremely dissatisfied" and 10 means "extremely satisfied," how satisfied were you with the nursing care you received during your hospitalization?
0 1 2 3 4 5 6 7 8 9 10
Extremely dissatisfied ... Extremely satisfied

Other formats: checklists encompass several questions that share the same response format; calendar questions obtain retrospective information about the chronology of different events and activities in people's lives; visual analogue scales (VAS).

Likert scales

A Likert scale consists of several declarative items that express a viewpoint on a topic. Good Likert scales usually include 10 or more statements. Because item responses are summed, they are also called summated rating scales.

Semantic differential scales

Used for measuring psychosocial traits: respondents are asked to rate a concept on a series of bipolar adjectives, such as effective/ineffective, good/bad, important/unimportant, or strong/weak.

Developing structured self-report instruments

- Identify the data to be collected; related constructs should be clustered into separate modules or areas.
- Decide on the sequencing of modules.
- Write introductory comments about the nature and purpose of the study.
- Discuss the draft critically with experts and with people who are knowledgeable about questionnaire construction.
- Have the draft reviewed by someone capable of detecting technical problems, such as spelling mistakes and grammatical errors.
- Pretest the revised version of the instrument.

Tips for wording questions

Consider clarity, the ability of respondents to give the information, bias, and sensitive information.
- State questions in the affirmative form.
- Avoid long sentences or phrases, and avoid technical terms.
- Avoid "double-barreled" questions that contain two distinct ideas.
- Do not assume that respondents will be aware of, or informed about, the issues or questions in which you are interested.
- Avoid leading questions that suggest a particular kind of answer.
- State a range of alternatives within the question itself when possible.
- For questions that deal with controversial opinions or socially unacceptable behaviour (e.g., excessive drinking habits, noncompliance with medical instructions), closed-ended questions may be preferred.
- Impersonal wording of a question is sometimes useful in minimizing embarrassment and encouraging honesty.
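The summated rating (Likert) scoring described above can be sketched as a short program. The item names, scale width, and responses below are hypothetical; negatively worded items are reverse-scored before summing, as is standard for summated scales.

```python
# Summated (Likert) scoring sketch: sum item responses after
# reverse-scoring negatively worded items. All data are hypothetical.

LIKERT_MAX = 5  # 5-point scale: 1 = strongly disagree ... 5 = strongly agree

def score_likert(responses, reversed_items):
    """Return the summated score for one respondent.

    responses: dict mapping item id -> response (1..LIKERT_MAX)
    reversed_items: set of ids for negatively worded items
    """
    total = 0
    for item, value in responses.items():
        if item in reversed_items:
            value = (LIKERT_MAX + 1) - value  # reverse-score: 1<->5, 2<->4
        total += value
    return total

respondent = {"q1": 4, "q2": 2, "q3": 5, "q4": 1}
print(score_likert(respondent, reversed_items={"q2", "q4"}))  # 4+4+5+5 = 18
```

Reverse-scoring ensures that a high total always means the same attitude direction, whichever way each statement is worded.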
Researchers concerned about possible respondent confusion or misinterpretation sometimes conduct cognitive questioning during the pretest.

Tips for preparing response alternatives

- Response options should cover all significant alternatives.
- Alternatives should be mutually exclusive.
- There should be an underlying rationale for the ordering of alternatives.
- Response alternatives should not be too lengthy.

Tips for formatting an instrument

- Try not to compress too many questions into too small a space.
- Set off the response options from the question or stem itself.
- Give special care to formatting filter questions.
- Avoid forcing all respondents to go through inapplicable questions in an SAQ.

Collecting observational data

Observable phenomena include: characteristics and conditions of individuals; activities and behaviour; skill attainment and performance; verbal communication; nonverbal communication; environmental characteristics.

QUALITATIVE OBSERVATIONAL METHODS: PARTICIPANT OBSERVATION

1. Primarily observation
2. Primarily observation with some participation
3. Primarily participation with some observation
4. Reflective observation

OBSERVATIONAL METHODS: STRUCTURED OBSERVATIONS

Structured observations use categories and checklists. An example checklist has eight categories: (1) seeks information, (2) gives information, (3) describes problem, (4) offers suggestion, (5) opposes suggestion, (6) supports suggestion, (7) summarizes, and (8) miscellaneous.

Rating scales require observers to rate a phenomenon along a descriptive continuum that is typically bipolar. The ratings are quantified for subsequent statistical analysis.

Sampling for structured observations: time sampling and event sampling.

BIOPHYSIOLOGIC MEASURES

Applications include: basic physiologic processes; physiologic outcomes of nursing care; evaluations of nursing interventions; product assessments; measurement and diagnosis improvement; studies of physiologic correlates.

Types of biophysiologic measures: in vivo measurements and in vitro measurements.

Q methodology (Stephenson, 1975) refers to a constellation of substantive, statistical, and psychometric concepts for research on individuals. Q methodology uses a Q-sort procedure, which involves sorting a deck of cards according to specified criteria.

PROJECTIVE TECHNIQUES

The Rorschach ink blot test is an example of a pictorial projective device; another example is the Thematic Apperception Test (TAT). There are also verbal projective techniques. Vignettes are brief descriptions of events or situations to which respondents are asked to react.

Advantages of questionnaires: cost; anonymity; absence of interviewer bias.

Advantages of interviews: response rates; audience; clarity; depth of questioning; missing information; order of questions; sample control; supplementary data.

Assessing data quality: errors of measurement

Obtained score = True score ± Error (X_O = X_T ± E)

Sources of measurement error include:
- situational contaminants
- transitory personal factors
- response-set biases
- administration variations
- instrument clarity
- item sampling
- instrument format

Whatever research design is selected, a primary concern is that the conclusions of the study be VALID and RELIABLE.

Validity means that your scientific observations actually measure what they intend to measure (your conclusions are true). A valid instrument measures what it is intended to measure: it is appropriate, meaningful, and useful, and it enables a performance analyst or evaluator to draw correct conclusions.

Types of validity: face, content, criterion (concurrent and predictive), and construct.

Reliability means that someone else using the same method in the same circumstances should be able to obtain the same findings (your findings are repeatable).

Threats to validity and reliability arise in:
- the selection of the study type and design
- data collection (related to the instrument)
- analysis of the data collected

Specific threats include:
1. confounding factors
2. history
3. differential subject loss in the various groups
4. selectivity (or bias) in assigning subjects to the various groups

Strategies to deal with threats to validity

- Triangulation: approaching a research problem from different angles, e.g., by selecting complementary study populations or using different research techniques at the same time.
- Control group: observing a control group that is not exposed to the risk factor or intervention reduces threats due to unexpected and confounding factors.
- Appropriate sampling procedures and assignment of subjects to research groups: this reduces threats due to selectivity.
- Before-and-after measurements: these allow us to assess whether there has been selectivity as well as differential loss of subjects.
If loss of subjects has been unavoidable, before-and-after measurements may also enable assessment of the dropouts, to determine whether they had peculiar characteristics that distinguished them from those who did not drop out.
- Unobtrusive methods of data collection, and allowing subjects adaptation time to get used to being observed or interviewed.
- Careful design and pre-testing of instruments, stressing the participation of health managers, staff and community members, reduces bias due to instrumentation. Training of interviewers and standardisation of interview techniques and tools, such as questionnaires, are also important in reducing this bias.
- Knowledge of the environment enables the researcher to be sensitive to external events that could affect validity (i.e., history). In the case of an expatriate researcher, local key informants can contribute a great deal to the validity of the study.
- Stratification and matching for confounding variables during the analysis of the results.

DATA-COLLECTION TECHNIQUES

Data-collection techniques allow us to systematically collect information about our objects of study (people, objects, phenomena) and about the settings in which they occur. Various techniques can be used:
- using available information
- observing
- interviewing (face-to-face)
- administering written questionnaires
- focus group discussions
- projective techniques, mapping, scaling

1. Using available information

Sources include analysis of health information system data, census data, unpublished reports, and publications in archives and libraries or in offices at the various levels of health and health-related services, as well as the use of key informants. Other sources of available data are newspapers and published case histories, e.g., patients suffering from serious diseases, or their relatives, telling their experiences and how they cope.

2. Observing

Participant observation: the observer takes part in the situation he or she observes (for example, a doctor hospitalised with a broken hip, who now observes hospital procedures 'from within').
Non-participant observation: the observer watches the situation, openly or concealed, but does not participate.
Observations can be open (e.g., 'shadowing' a health worker, with his or her permission, during routine activities) or concealed (e.g., 'mystery clients' trying to obtain antibiotics without a medical prescription).

3. Interviewing

An INTERVIEW is a data-collection technique that involves oral questioning of respondents, either individually or as a group. Answers can be recorded by writing them down (either during the interview itself or immediately afterwards), by tape-recording the responses, or by a combination of both. Interviews can be conducted with varying degrees of flexibility.

4. Administering written questionnaires

A WRITTEN QUESTIONNAIRE (also referred to as a self-administered questionnaire) is a data-collection tool in which written questions are presented that are to be answered by the respondents in written form. It can be administered by:
- sending questionnaires by mail with clear instructions on how to answer the questions, and asking for mailed responses;
- gathering all or part of the respondents in one place at one time, giving oral or written instructions, and letting the respondents fill out the questionnaires; or
- hand-delivering questionnaires to respondents and collecting them later.

5. Focus group discussions (FGD)

A focus group discussion allows a group of 8 to 12 informants to freely discuss a certain subject with the guidance of a facilitator or reporter.

6. Projective techniques

When a researcher uses projective techniques, (s)he asks an informant to react to some kind of visual or verbal stimulus.

7. Mapping and scaling

Mapping is a valuable technique for visually displaying relationships and resources. Scaling is a technique that allows researchers, through their respondents, to categorise variables that the respondents would not otherwise be able to rank themselves.

(The original slides also tabulate the differentiation between data collection techniques and data collection tools, and the advantages and disadvantages of the various data collection techniques.)

MEASUREMENT CONCEPTS

An OPERATIONAL DEFINITION is the definition of a variable in terms of the actual procedures used by the researcher to measure and/or manipulate it. Similar to a recipe, operational definitions specify exactly how to measure and/or manipulate the variables in a study. Good operational definitions define procedures precisely, so that other researchers can replicate the study. Examples:
- Impulsivity was operationalized as the total number of incorrect stimulus responses.
- Two doses of alcohol were used: 5 g/kg and 10 g/kg.
- Alcohol dependence vulnerability was defined as the total score on the Michigan Alcohol Screening Test (MAST; Selzer, 1971).

Measurement error

A participant's score on a particular measure consists of two components:
Observed score = True score + Measurement error
The TRUE SCORE is the score the participant would have obtained if measurement were perfect, i.e., if we were able to measure without error. MEASUREMENT ERROR is the component of the observed score that results from factors that distort the score from its true value.

Factors that influence measurement error:
- Transient states of the participants (transient mood, health, fatigue level, etc.)
- Stable attributes of the participants (individual differences in intelligence, personality, motivation, etc.)
- Situational factors of the research setting (room temperature, lighting, crowding, etc.)

Characteristics of measures and manipulations that affect error:
- precision and clarity of operational definitions
- training of observers
- the number of independent observations on which a score is based (more is generally better)
- measures that induce fatigue or fear

Actual mistakes also contribute:
- equipment malfunction
- errors in recording behaviours by observers
- confusing response formats for self-reports
- data entry errors

Measurement error undermines the reliability (repeatability) of the measures we use.

Reliability

The reliability of a measure is an inverse function of measurement error: the more error, the less reliable the measure. Reliable measures provide consistent measurement from occasion to occasion; reliability is the degree to which measures obtained with an instrument are consistent measures of what the instrument is intended to measure. Sources of error include random error (unpredictable error, primarily affected by sampling technique; it can be reduced by selecting more representative and larger samples) and measurement error proper (the performance of the instrument).

Types of reliability: test-retest; equivalent forms; internal consistency (split-half, Kuder-Richardson, and Cronbach alpha approaches).

Test-retest reliability. Administer the same instrument twice to the same group after a time interval has elapsed, and calculate a reliability coefficient (r) to indicate the relationship between the two sets of scores. An r of +.51 to +.75 is moderate to good; an r over +.75 is very good to excellent.

Equivalent forms reliability. Also called alternate or parallel forms. Two forms of the instrument, varying the order or wording of the stems and response sets, are administered to the same group at the same time, and a reliability coefficient (r) between the two sets of scores is calculated and interpreted as above.

Internal consistency reliability.
- Split-half: break the instrument (or its sub-parts) in half, like two instruments, and correlate scores on the two halves.
- Kuder-Richardson (KR): treats the instrument as a whole and compares the variance of total scores with the sum of the item variances.
- Cronbach alpha: like the KR approach, but for scaled or ranked data.
It is best to consult a statistics book or a consultant, and to use computer software, for these calculations.

Estimating reliability. Reliability can range from 0 to 1.0. When a reliability coefficient equals 0, the scores reflect nothing but measurement error. Rule of thumb: measures with reliability coefficients of .70 or greater have acceptable reliability.

Different methods for assessing reliability: test-retest reliability, inter-rater reliability, and internal consistency reliability.

Test-retest reliability refers to the consistency of participants' responses over time (usually a few weeks; why?).
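The test-retest coefficient described above is simply a Pearson correlation between the two administrations. A minimal sketch, with hypothetical scores for five respondents tested two weeks apart:

```python
import math

def pearson_r(x, y):
    """Reliability coefficient: Pearson correlation between two
    administrations of the same instrument to the same group."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical test and retest scores (same five respondents, same order):
test_scores = [12, 15, 11, 18, 14]
retest_scores = [13, 14, 12, 17, 15]
r = pearson_r(test_scores, retest_scores)
print(round(r, 2))  # about 0.95: "very good to excellent" by the rule of thumb
```

The same calculation serves for equivalent-forms reliability, with the two score lists coming from the two parallel forms instead of two occasions.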
Assumes the characteristic being measured is stable over time—not expected to change between test and retestInter-rater Reliability: Inter-rater Reliability If a measurement involves behavioral ratings by an observer/rater, we would expect consistency among raters for a reliable measure Best to use at least 2 independent raters, ‘blind’ to the ratings of other observers Precise operational definitions and well-trained observers improve inter-rater reliabilityInternal Consistency Reliability: Internal Consistency Reliability Relevant for measures that consist of more than 1 item (e.g., total scores on scales, or when several behavioral observations are used to obtain a single score) Internal consistency refers to inter-item reliability , and assesses the degree of consistency among the items in a scale, or the different observations used to derive a score Want to be sure that all the items (or observations) are measuring the same constructEstimates of Internal Consistency: Estimates of Internal Consistency Item-total score consistency Split-half reliability : randomly divide items into 2 subsets and examine the consistency in total scores across the 2 subsets (any drawbacks?) 
Cronbach’s Alpha : conceptually, it is the average consistency across all possible split-half reliabilities Cronbach’s Alpha can be directly computed from dataEstimating the Validity of a Measure: Estimating the Validity of a Measure A good measure must not only be reliable, but also valid A valid measure measures what it is intended to measure Validity is not a property of a measure, but an indication of the extent to which an assessment measures a particular construct in a particular context —thus a measure may be valid for one purpose but not another A measure cannot be valid unless it is reliable, but a reliable measure may not be validEstimating Validity: Estimating Validity Like reliability, validity is not absolute Validity is the degree to which variability (individual differences) in participant’s scores on a particular measure, reflect individual differences in the characteristic or construct we want to measure Three types of measurement validity: Face Validity Construct Validity Criterion ValiditySlide 110: It looks OK Looks to measure what it is supposed to measure Look at items for appropriateness Client Sample respondents Least scientific validity measureFace Validity: Face Validity Face validity refers to the extent to which a measure ‘ appears ’ to measure what it is supposed to measure Not statistical—involves the judgment of the researcher (and the participants) A measure has face validity—’if people think it does’ Just because a measure has face validity does not ensure that it is a valid measure (and measures lacking face validity can be valid)Construct Validity: Construct Validity Most scientific investigations involve hypothetical constructs—entities that cannot be directly observed but are inferred from empirical evidence (e.g., intelligence) Construct validity is assessed by studying the relationships between the measure of a construct and scores on measures of other constructs We assess construct validity by seeing whether a particular 
measure relates as it should to other measuresSelf-Esteem Example: Self-Esteem Example Scores on a measure of self-esteem should be positively related to measures of confidence and optimism But, negatively related to measures of insecurity and anxietyConvergent and Discriminant Validity: Convergent and Discriminant Validity To have construct validity, a measure should both : Correlate with other measures that it should be related to ( convergent validity ) And, not correlate with measures that it should not correlate with ( discriminant validity )Criterion-Related Validity: Criterion-Related Validity Refers to the extent to which a measure distinguishes participants on the basis of a particular behavioral criterion The Scholastic Aptitude Test (SAT) is valid to the extent that it distinguishes between students that do well in college versus those that do not A valid measure of marital conflict should correlate with behavioral observations (e.g., number of fights) A valid measure of depressive symptoms should distinguish between subjects in treatment for depression and those who are not in treatmentTwo Types of Criterion-Related Validity: Two Types of Criterion-Related Validity Concurrent validity measure and criterion are assessed at the same time Predictive validity elapsed time between the administration of the measure to be validated and the criterion is a relatively long period (e.g., months or years) Predictive validity refers to a measure’s ability to distinguish participants on a relevant behavioral criterion at some point in the futureSAT Example: SAT Example High school seniors who score high on the the SAT are better prepared for college than low scorers ( concurrent validity ) Probably of greater interest to college admissions administrators, SAT scores predict academic performance four years later ( predictive validity )Traditional data collection methods: Traditional data collection methods Mailing paper questionnaires to respondents, who fill them out 
and mail them back; having interviewers call respondents on the telephone and ask them the questions in a telephone interview; and sending interviewers to the respondent's home or office to administer the questions in face-to-face (FTF) interviews.

Alternative methods of data collection: Face-to-face; Telephone; Mail; CATI (computer-assisted telephone interviewing); CAPI (computer-assisted personal interviewing); TDE (touchtone data entry); OCR/ICR (optical/intelligent character recognition); FAX; Disk by Mail; E-mail; Web; Computerised Self-Administered Questionnaires; IVR (interactive voice response); SAQ (self-administered questionnaire); Walkman; Text CASI; Audio CASI; Video CASI.

Alternative methods of data collection (a) - Mail: OCR/ICR (optical/intelligent character recognition), FAX, Disk by Mail, E-mail, Web.

Alternative methods of data collection (b) - Telephone: CATI (computer-assisted telephone interviewing), TDE (touchtone data entry), IVR (interactive voice response).

Alternative methods of data collection (c) - Face-to-face: CAPI (computer-assisted personal interviewing), SAQ (self-administered questionnaire), Walkman, Text CASI, Audio CASI, Video CASI.

DESIGNING A QUESTIONNAIRE

QUESTIONS AND ANSWERS IN SURVEYS: A questionnaire is a standardised set of questions administered to the respondents in a survey. Respondents are required to interpret a pre-established set of questions and to supply the information these questions seek.

Problems in answering survey questions: Failure to encode the information sought; misinterpretation of the questions; forgetting and other memory problems; estimation strategies; problems in formatting the answer; more or less deliberate misreporting; failure to follow instructions.

FORMATTING THE ANSWER: Survey items
can take a variety of formats; the most common are: open-ended questions that call for numerical answers; closed questions with ordered response scales; and closed questions with categorical response options.

1 - Open-ended questions that call for numerical answers: "Now, thinking about your physical health, which includes physical illness and injury, for how many days during the past 30 was your physical health not good?" Note that open-ended items yield more exact information than closed items.

2 - Closed questions with ordered response scales: "Would you say that in general your health is: Excellent / Very good / Good / Fair / Poor." The interviewer is instructed to "please read" the answer categories, but not the numbers attached to them!

3 - Closed questions with categorical response options: "Are you: Married / Divorced / Widowed / Separated / Never married / A member of an unmarried couple." Note that the respondent may not wait to hear or read all the options; they may select the first reasonable answer they consider (primacy effect). The opposite can also happen: the last option the interviewer reads may be the first one the respondent thinks about (recency effect).

GUIDELINES FOR WRITING GOOD QUESTIONS

Non-sensitive questions about behavior: The key problem with many questions about behavior is that respondents may forget some or all of the relevant information, or that their answers may reflect inaccurate estimates. In order to reduce memory problems it is essential to pay attention to the wording of the question and to provide memory help.

Attitude questions: Attitude questions are a very common class of survey questions.
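The primacy and recency effects described above arise when every respondent sees the options in the same order. One common mitigation, not described in these slides but standard in survey practice, is to rotate or randomize the option order across respondents so that no single option always benefits from its position. A minimal sketch (the option list and seeding scheme are illustrative assumptions):

```python
import random

# Illustrative sketch: present categorical response options in a
# per-respondent random order, so primacy/recency effects average out
# across the sample instead of always favouring the same option.
OPTIONS = ["Married", "Divorced", "Widowed", "Separated",
           "Never married", "A member of an unmarried couple"]

def presented_options(respondent_id, options=OPTIONS):
    """Return the option list in a random order specific to one respondent."""
    rng = random.Random(respondent_id)  # seeded so the order is reproducible
    shuffled = list(options)
    rng.shuffle(shuffled)
    return shuffled

# Each respondent still sees every option exactly once,
# and the same respondent always sees the same order.
assert sorted(presented_options(1)) == sorted(OPTIONS)
assert presented_options(7) == presented_options(7)
```

Seeding the shuffle on the respondent identifier keeps the presentation order stable if the same respondent is re-interviewed, while still varying it across the sample.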
The most frequent problems concern the wording of the questions, the question order, and the format of the response scales.

Non-sensitive questions about behavior - pay attention to the wording: With closed questions, include all reasonable possibilities as explicit response options. Compare "Are you: Married / Divorced / Widowed / Separated / Never married" with "Are you: Married / Single".

Non-sensitive questions about behavior - pay attention to the wording: Make the question as specific as possible (about whom it covers, what time period, which behaviours...). Compare "Over the last month, that is ..., how often did you read a newspaper in a typical week?" with "In a typical week, how often do you read a newspaper?"

Non-sensitive questions about behavior - pay attention to the wording: Use words that virtually all respondents will understand. Compare "Have you ever had a heart attack?" with "Have you ever had a myocardial infarction?"

Non-sensitive questions about behavior - provide memory help: Compare "Please look carefully at the following list of voluntary organisations: which, if any, do you belong to? A Religious organisations / B Cultural organisations / C Political groups / D Other" with "To which voluntary organisation do you belong?"

Attitude questions - pay attention to the wording: Clearly specify the attitude object of interest. Compare "Do you think the Government is spending too little, about the right amount, or too much on higher education?"
with "Do you think the Government is spending too little, about the right amount, or too much on education?"

Measure the strength of the attitude using a response scale, a separate item, or multiple items that can be combined into a scale: "Do you agree or disagree with the following statement? Government is spending too little on education. 1 Agree strongly / 2 Agree / 3 Neither agree nor disagree / 4 Disagree / 5 Disagree strongly."

Attitude questions - reduce the impact of question order: When asking general and specific questions about a topic, ask the general question first (otherwise, the answer to the general question is likely to be affected by the number and content of the specific questions). For example, "Please tell me whether or not you think it should be possible for a pregnant woman to obtain a legal abortion if: the woman wants it for any reason? 1. Yes 2. No 3. Don't know" should come before the more specific "Please tell me whether or not you think it should be possible for a pregnant woman to obtain a legal abortion if: there is a strong chance of a serious defect in the baby? 1. Yes 2. No 3. Don't know".
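When multiple Likert items like the one above are combined into a single attitude scale, each response is coded numerically and items worded in the opposite direction are reverse-scored before summing. A minimal sketch of that coding step (the item names, responses, and which item is reverse-worded are hypothetical):

```python
# Illustrative sketch: combine several 5-point Likert items into one
# attitude-scale score. Coding follows the slide: 1 = agree strongly
# ... 5 = disagree strongly. Negatively worded items are reverse-scored
# (1 <-> 5) so all items point in the same attitudinal direction.
SCALE_MAX = 5

def reverse(score):
    """Reverse-score a 1..5 response: 1 -> 5, 2 -> 4, ..., 5 -> 1."""
    return SCALE_MAX + 1 - score

def scale_score(responses, reversed_items):
    """Sum the coded responses; `reversed_items` names the
    negatively worded items that must be reverse-scored first."""
    total = 0
    for item, score in responses.items():
        total += reverse(score) if item in reversed_items else score
    return total

# One respondent's hypothetical answers to three items:
answers = {
    "spend_too_little_on_education": 1,  # agree strongly
    "spend_too_much_on_education": 5,    # disagree strongly (reverse-worded)
    "raise_teacher_pay": 2,              # agree
}
print(scale_score(answers, reversed_items={"spend_too_much_on_education"}))
# -> 4  (i.e. 1 + reverse(5) + 2 = 1 + 1 + 2)
```

With this coding a low total indicates consistent agreement with the pro-spending direction of the scale; in practice, item totals are often averaged instead of summed, which does not change the ordering of respondents.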