Margoth B.G


Higher power of the universe!

DIVINITY, please heal within me these painful memories and ideas that are causing negative feelings of disgust and anger inside me. I am Sorry, I Love You, Forgive me, thank you!

Higher Power of the Universe, Higher Power in the Universe, Greater Power in the Universe. Please take good care of my consciousness, my subconscious, and my physical, mental, and spiritual being in my present. Protect all members of my family, especially my children and my husband.

Father, Mother, Divine Creator, and Creator’s Children, all in one: if my family, my relatives, and my ancestors offended your family, relatives, and ancestors in thoughts, words, and actions from the beginning of our creation to the present, we ask for your forgiveness. Let this be cleansed, purified, and released. Cut out all the wrong energies, memories, and negative vibrations, and transmute these unspeakable energies into pure light, and so be it done.

Divine Intelligence, please heal within me the painful memories that are producing this affliction in me. I am sorry, forgive me, I love you, thank you. So be it! Thank you! Margoth.

DIVINITY, please heal within me these painful memories and ideas that are causing negative feelings such as disgust or anger inside me. I am sorry. I love you. Thank you. Forgive me.

Higher Power of the Universe, Greater Power in the Universe, Greatest Power in the Universe. Please care for and protect my consciousness, my subconscious, my physical, mental, and spiritual being, and my present. Protect all members of my family, especially my children and my husband.

Father, Mother, Divine Creator, and Creator’s Children, all in one: if my family, my relatives, and my ancestors offended your family, relatives, and ancestors in thoughts, words, and actions carried out from the beginning of our creation to the present, we ask for your forgiveness. Let this be cleansed, purified, and released. Cut out all the wrong energies, memories, and negative vibrations, and transmute these unspeakable energies into pure light, and so be it done. Divine Intelligence, heal within me the painful memories that are producing this affliction in me. I am sorry, forgive me, I love you, thank you. So be it! Thank you! Margoth.


my life

Tuesday, January 21

PSYC 205 Science Research Methods

Threats to Construct Validity:
Inadequate preoperational explication of constructs: we didn’t clearly define things before we started.
Mono-operation bias: we use a single measure of a construct that is not complete.
Mono-method bias: using only one approach to measuring a construct. For example, we only use surveys to assess employee engagement; these are self-report measures and subject to self-report bias.
Hypothesis guessing: participants try to guess what we are looking for and act differently. For example, participants learn they are in a pilot program aimed at improving success, so they work harder or report better results.

Must do for external validity: randomly select people to study (not just randomly assign them). Replicate, even on a small scale, over time and over different samples. Be clear about how you select people, how you recruit them, and the times and settings of the study.
Internal Validity: confidence in cause and effect. Requirements: a difference in the dependent variable; the independent variable comes before the dependent variable; extraneous factors (alternative, rival hypotheses) are ruled out. In theory: two identical groups, pretest-posttest design.
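To make the pretest-posttest logic above concrete, here is a minimal, hypothetical sketch in Python; the scores, group sizes, and variable names are invented for illustration only.

```python
# Hypothetical pretest-posttest comparison of two groups (invented scores).
treatment_pre  = [52, 48, 55, 60, 47]   # scores before the program
treatment_post = [61, 58, 63, 70, 55]   # scores after the program
control_pre    = [50, 49, 57, 59, 46]
control_post   = [53, 50, 58, 61, 48]

def mean(scores):
    return sum(scores) / len(scores)

treatment_change = mean(treatment_post) - mean(treatment_pre)
control_change = mean(control_post) - mean(control_pre)

# If the two groups really were identical at the start, the extra change in the
# treatment group relative to the control group is evidence that the independent
# variable, rather than some extraneous factor, produced the effect.
print(f"Treatment change: {treatment_change:.1f}")
print(f"Control change:   {control_change:.1f}")
print(f"Change attributable to the program: {treatment_change - control_change:.1f}")
```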
Make sure to consider the interactions: selection, setting, and history. Interaction of selection and treatment: those selected may react differently to our program than to other programs (for example, if we select only people with high potential for our program). Interaction of setting and treatment: results in one setting may not work in other settings. Interaction of history and treatment: the results we see today may not be true at other times.
http://allpsych.com/researchmethods/researcherror/

https://www.youtube.com/watch?v=wWM64j870kE

If we were to argue that there is a “best kind of research,” which one would it be? Is there a “best kind of research” at all?

A homogeneous sample is one in which the researcher chooses participants who are alike – for example, participants who belong to the same subculture or have similar characteristics.  A homogeneous sample may be chosen with respect to only a certain variable – for instance, the researcher may be interested in studying participants who work in a certain occupation, or are in a certain age group.  Homogeneous sampling can be of particular use for conducting focus groups because individuals are generally more comfortable sharing their thoughts and ideas with other individuals who they perceive to be similar to them.


Relationship between theory and data

 Scientific Method: 

1. Basic versus Applied Research

Goal of describing, predicting, & explaining fundamental principles of behavior vs. solving real-life problems

2. Laboratory Research versus Field Research

Research in controlled laboratories vs. uncontrolled or real-life contexts

3. Quantitative versus Qualitative  Research

Descriptive & inferential statistics vs. narrative analysis (e.g., case studies, observational research, interviews)

ASSUMPTIONS ABOUT BEHAVIORS OR OBSERVATIONS:

Social Science Research Methods

In many studies, use is made of pre-existing groups of people. For example, we might compare the performance of males and females, or that of young and middle-aged individuals. Do such studies qualify as genuine experiments? The answer is “No”. Use of the experimental method requires that the independent variable is manipulated by the experimenter, but clearly the experimenter cannot decide whether a given person is going to be male or female for the purposes of the study! What is generally regarded as the greatest advantage of the experimental method is that it allows us to establish cause and effect relationships. In the terms we have been using, the independent variable in an experiment is often regarded as a cause, and the dependent variable is the effect. Philosophers of science have argued about whether or not causality can be established by  experimentation. However, the general opinion is that causality can only be inferred. If y (e.g. poor performance) follows x (e.g. intense noise), then it is reasonable to infer that x caused y.


Social Science Research Methods

Comparative-Historical Methods in Comparative Perspective

Social Science Research Methods
1)    In the scientific community, and particularly in psychology and health, there has been an  active and ongoing debate on the relative merits of adopting either quantitative or
qualitative methods, especially when researching into human behavior (Bowling, 2009;
Oakley, 2000; Smith, 1995a, 1995b; Smith, 1998). In part, this debate formed a component of
the development in the 1970s of our thinking about science. Andrew Pickering has described
this movement as the “sociology of scientific knowledge” (SSK), where our scientific
understanding, developing scientific ‘products’ and ‘know-how’, became identified as
forming components in a wider engagement with society’s environmental and social context
(Pickering, 1992, pp. 1). Since that time, the debate has continued so that today there is an
increasing acceptance of the use of qualitative methods in the social sciences (Denzin &
Lincoln, 2000; Morse, 1994; Punch, 2011; Robson, 2011) and health sciences (Bowling, 2009;
Greenhalgh & Hurwitz, 1998; Murphy & Dingwall, 1998). The utility of qualitative methods
has also been recognized in psychology. Many authors argue that qualitative psychology is much more accepted today and that it has moved from “the margins to the mainstream in psychology in the UK.” (Willig & Stainton Rogers, 2008, pp. 8). Nevertheless, in psychology, qualitative
methodologies are still considered to be relatively ‘new’ (Banister, Bunn, Burman, et al.,
2011; Hayes, 1998; Richardson, 1996) despite clear evidence to the contrary (see, for example,
the discussion on this point by Rapport et al., 2005). Some researchers observe, on scanning the
content of some early journals from the 1920s – 1930s, that many of these more historical
papers “discuss personal experiences as freely as statistical data” (Hayes, 1998, p. 1). This can
be viewed as an early development of the case-study approach, now an accepted
methodological approach in psychological, health care and medical research, where our
knowledge about people is enhanced by our understanding of the individual ‘case’ (May &
Perry, 2011; Radley & Chamberlain, 2001; Ragin, 2011; Smith, 1998).
The discipline of psychology, originating as it did during the late 19th century, in parallel
with developments in modern medicine, tended, from the outset, to emphasize the ‘scientific
method’ as the way forward for psychological inquiry. This point of view arose out of the
previous century’s Enlightenment period which underlay the founding of what is generally
agreed to be the first empirical experimental psychology laboratory, established by Wilhelm
Wundt, University of Leipzig, in 1879. During this same period, other early psychology
researchers, such as the group of scientific thinkers interested in perception (the Gestaltists:
see, for example, Lamiell, 1995) were developing their work. Later, in the 20th century, the
introduction of Behaviorism became the predominant school of psychology in America
and Britain. Behaviorism emphasized a reductionist approach, and this movement, until its
displacement in the 1970-80s by the ‘cognitive revolution’, dominated the discipline of
psychology (Hayes, 1998, pp. 2-3). These approaches have served the scientific community
well, and have been considerably enhanced by increasingly sophisticated statistical
computer programs for data analysis.
What’s the difference? Quantitative Research: This type of research is concerned with quantifying (measuring and counting) and subscribes to a particular empirical approach to knowledge, believing that by measuring accurately enough we can make claims about the object under study.
2)    Qualitative Research: This type of research is concerned with the quality or qualities of an experience or phenomenon. Qualitative research rejects the notion of there being a simple relationship between our perception of the world and the world itself, instead arguing that each individual places different meanings on different events or experiences and that these are constantly changing. Qualitative research generally gathers text-based data through exercises with a small number of participants, usually semi-structured or unstructured interviews.
“Quantitative research is concerned with quantifying (measuring and counting) and subscribes to a particular empirical approach to knowledge, believing that by measuring accurately enough we can make claims about the object under study. Due to the stringency and ‘objectivity’ of this form of research, quantitative research is often conducted in controlled settings, such as labs, to make sure that the data is as objective and unaffected by external conditions as possible. This helps with the replicability of the study: by conducting a study more than once and receiving the same or similar responses, you can be pretty sure your results are accurate. Quantitative research tends to be predictive in nature and is used to test research hypotheses, rather than descriptions of processes. Quantitative research tends to use a large number of participants, using experimental methods or very structured psychometric questionnaires.

By contrast, qualitative research is concerned with the quality or qualities of an experience or phenomenon. Qualitative research rejects the notion of there being a simple relationship between our perception of the world and the world itself, instead arguing that each individual places different meanings on different events or experiences and that these are constantly changing. Qualitative research generally gathers text-based data through exercises with a small number of participants, usually semi-structured or unstructured interviews.”



Adapted from: barkerwordpress.com
3)    Quantitative Research: An Overview
Mathematically based
Often uses survey-based measures to collect data
Often collects data on what is known as a “Likert scale”: a 4-7 point numerical scale on which a participant rates agreement
Uses statistical methodology to analyze numerical data
As quantitative research is essentially about collecting numerical
data to explain a particular phenomenon, particular questions seem immediately
suited to being answered using quantitative methods. How many males get a first-class degree at university compared to females? What percentage of teachers and school leaders belong to ethnic minority groups?
Has pupil achievement in English improved in our school district over
time? These are all questions we can look at quantitatively, as the data we
need to collect are already available to us in numerical form. Does this not
severely limit the usefulness of quantitative research though? There are
many phenomena we might want to look at, but which don’t seem to produce
any quantitative data. In fact, relatively few phenomena in education
actually occur in the form of ‘naturally’ quantitative data.
Luckily, we are far less limited than might appear from the above. Many
data that do not naturally appear in quantitative form can be collected in
a quantitative way. We do this by designing research instruments aimed
specifically at converting phenomena that don’t naturally exist in quantitative
form into quantitative data, which we can analyze statistically.
Examples of this are attitudes and beliefs. We might want to collect data
on pupils’ attitudes to their school and their teachers. These attitudes
obviously do not naturally exist in quantitative form (we don’t form our
attitudes in the shape of numerical scales!). Yet we can develop a questionnaire
that asks pupils to rate a number of statements (for example, ‘I think
school is boring’) as either ‘agree strongly’, ‘agree’, ‘disagree’ or ‘disagree
strongly’, and give the answers a number (e.g. 1 for ‘disagree strongly’, 4
for agree strongly). Now we have quantitative data on pupil attitudes to
school. In the same way, we can collect data on a wide number of phenomena,
and make them quantitative through data collection instruments
such as questionnaires or tests. The number of phenomena we can study in this way is almost unlimited, making quantitative research quite flexible. This is not to say that all phenomena are best studied by quantitative methods. As we will see, while quantitative methods have some notable advantages, they also have
disadvantages, which means that some phenomena are better studied by
using different (qualitative) methods.
The last part of the definition refers to the use of mathematically based
methods, in particular statistics, to analyze the data. This is what people
usually think about when they think of quantitative research,
and is often seen as the most important part of quantitative studies. This is a bit of a misconception, as, while using the right data analysis tools obviously matters
a great deal, using the right research design and data collection instruments
is actually more crucial. The use of statistics to analyze the data is, however, the element that puts a lot of people off doing quantitative research, as the mathematics underlying the methods seems complicated and frightening. As we will see later on in this book, most researchers do not really have to be particularly expert in the mathematics underlying the methods, as computer software allows us to do the analyses quickly and (relatively) easily.
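As a minimal illustration of the coding idea described above (turning ‘disagree strongly’ through ‘agree strongly’ into the numbers 1 to 4 and then analyzing them), here is a hypothetical Python sketch; the statement, responses, and scores are invented.

```python
# Hypothetical sketch: convert Likert-style answers into numbers we can analyze.
scale = {"disagree strongly": 1, "disagree": 2, "agree": 3, "agree strongly": 4}

# Responses to the (invented) statement "I think school is boring"
responses = ["agree", "disagree strongly", "agree strongly", "disagree", "agree"]

scores = [scale[r] for r in responses]
mean_score = sum(scores) / len(scores)

print(scores)       # [3, 1, 4, 2, 3]
print(mean_score)   # 2.6 -> the attitude is now quantitative data
```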



Example of Survey Used in Quantitative Research

So, as we can see here, this is an example of what are known as Likert-type scales, which are used to numerically quantify a research participant’s response. Each response is assigned a number, which can then be entered into a statistical analysis that uses mathematical principles to generate understandable results based on a smaller sample of people taken from the larger population.
5) Relationship between the Sample and the Population
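To make the sample-population relationship concrete, here is a minimal, hypothetical sketch; the population values, sample size, and random seed are invented for illustration.

```python
# Hypothetical sketch: estimate a population mean from a random sample.
import random

random.seed(1)
population = [random.randint(1, 5) for _ in range(10_000)]  # e.g., a score for every person
sample = random.sample(population, 100)                      # the 100 people we actually survey

pop_mean = sum(population) / len(population)
sample_mean = sum(sample) / len(sample)

# With a well-drawn random sample, the sample mean should land close to the
# population mean, which is what lets us generalize from sample to population.
print(f"Population mean: {pop_mean:.2f}")
print(f"Sample mean:     {sample_mean:.2f}")
```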
6) When to use Quantitative Research
1) When we are looking for a numerical answer
2) When we want to study numerical change
3) When we want to find out about the state of something or to explain a phenomenon
4) When we want to test a hypothesis
If we take a pragmatic approach to research methods, the main question
that we need to answer is ‘what kind of questions are best answered by
using quantitative as opposed to qualitative methods?’
There are four main types of research questions that quantitative research
is particularly suited to finding an answer to:
1. The first type of research question is that demanding a quantitative
answer. Examples are: ‘How many students choose to study education?’
or ‘How many math teachers do we need and how many have we got in
our school district?’ That we need to use quantitative research to answer
this kind of question is obvious. Qualitative, non-numerical methods
will obviously not provide us with the (numerical) answer we want.
2. Numerical change can likewise accurately be studied only by using quantitative
methods. Are the numbers of students in our university rising or
falling? Is achievement going up or down? We’ll need to do a quantitative
study to find out.
3. As well as wanting to find out about the state of something or other, we
often want to explain phenomena. What factors predict the recruitment
of math teachers? What factors are related to changes in student
achievement over time? As we will see later on in this book, this kind of
question can also be studied successfully by quantitative methods, and
many statistical techniques have been developed that allow us to predict
scores on one factor, or variable (e.g. teacher recruitment) from scores on
one or more other factors, or variables (e.g. unemployment rates, pay,
conditions); a minimal illustration of this kind of prediction appears just after this list.
4. The final activity for which quantitative research is especially suited is
the testing of hypotheses. We might want to explain something – for
example, whether there is a relationship between pupils’ achievement
and their self-esteem and social background. We could look at the theory
and come up with the hypothesis that lower social class background
leads to low self-esteem, which would in turn be related to low achievement.
Using quantitative research, we can try to test this kind of model.
Problems one and two above are called ‘descriptive’. We are merely trying
to describe a situation. Three and four are ‘inferential’. We are trying to
explain something rather than just describe it.
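As an illustration of point 3 in the list above (predicting scores on one variable from scores on one or more others), here is a minimal sketch using ordinary least squares in Python; the pay and recruitment figures are invented and the fitted line is purely illustrative.

```python
# Hypothetical sketch: predict one variable (recruits) from another (pay)
# using a simple least-squares line fitted with numpy.
import numpy as np

pay = np.array([30, 35, 40, 45, 50, 55])       # e.g., starting salary, in thousands
recruits = np.array([12, 15, 19, 22, 27, 30])  # e.g., math teachers recruited

slope, intercept = np.polyfit(pay, recruits, 1)  # fit recruits ~ slope * pay + intercept

# The fitted line lets us predict recruitment for a pay level we have not observed.
predicted = slope * 60 + intercept
print(f"recruits ~ {slope:.2f} * pay + {intercept:.2f}")
print(f"Predicted recruits at pay = 60: {predicted:.1f}")
```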
7) Advantages and Disadvantages:  Quantitative
Quantitative Advantages:
Concise
Accurate
Strictly Controlled
Replicable
Can indicate causation
Ideally is objective


Quantitative Disadvantages:
Limited understanding of individuality
Groups people into categories
Can be accused of oversimplifying human nature
When we think about the advantages of quantitative research, the first thing we will acknowledge is that it is the dominant approach in psychological research. It is concise, accurate, and can be strictly controlled to ensure that the results are replicable and that causation is established. Quantitative data also has predictive power, in that research can be generalized to a different setting. Quantitative data can also be a lot faster and easier to analyze than qualitative data.

When we look to the disadvantages of quantitative research, the first major issue is that quantitative data does not directly recognize the individuality of human beings and can be guilty of grouping people into set categories because it is easier to analyze; it can also be accused of oversimplifying human nature. This form of research does not recognize the subjective nature of all social research: if we set out to support a hypothesis, as quantitative research often does, we aren’t being entirely objective.

While quantitative research is well suited to the questions above, there are other types of questions that are not
well suited to quantitative methods.

1. The first situation where quantitative research will fail is when we want
to explore a problem in depth. Quantitative research is good at providing
information in breadth, from a large number of units, but when we
want to explore a problem or concept in depth, quantitative methods
can be too shallow. To really get under the skin of a phenomenon, we
will need to go for ethnographic methods, interviews, in-depth case
studies and other qualitative techniques.

2. We saw above that quantitative research is well suited for the testing of
theories and hypotheses. What quantitative methods cannot do very
well is develop hypotheses and theories. The hypotheses to be tested may
come from a review of the literature or theory, but can also be developed
by using exploratory qualitative research.

3. If the issues to be studied are particularly complex, an in-depth qualitative
study (a case study, for example) is more likely to pick up on this
than a quantitative study. This is partly because there is a limit to how
many variables can be looked at in any one quantitative study, and partly
because in quantitative research the researcher defines the variables to be
studied herself, while in qualitative research unexpected variables may
emerge.

4. Finally, while quantitative methods are best for looking at cause and
effect (causality, as it is known), qualitative methods are more suited to
looking at the meaning of particular events or circumstances.


8) Qualitative Research

Focus on “language rather than numbers”
Embraces “intersubjectivity” or how people may construct meaning…
Focus on the individual and their real lived experience
Qualitative methods have much to offer when we need to explore people’s feelings or ask
participants to reflect on their experiences. As was noted above, some of the earliest
psychological thinkers of the late 19th century and early 20th century may be regarded as
proto-qualitative researchers. Examples include the ‘founding father’ of psycho-analysis,
Sigmund Freud, who worked in Vienna (late 19th century – to mid 20th century), recorded
and published numerous case-studies and then engaged in analysis, postulation and
theorizing on the basis of his observations, and the pioneering Swiss developmental
psychologist, Jean Piaget (1896 – 1980) who meticulously observed and recorded his
children’s developing awareness and engagement with their social world. They were
succeeded by many other authors from the 1940s onwards who adopted qualitative
methods and may be regarded as contributors to the development of qualitative
methodologies through their emphasis of the importance of the idiographic and use of case
studies (Allport, 1946; Nicholson, 1997). This locates the roots of qualitative thinking in the
long-standing debate between empiricist and rationalistic schools of thought, and also in
social constructionism (Gergen, 1985; King & Horrocks, pp. 6 – 24).
So, what exactly is qualitative research? A practical definition points to methods that use
language, rather than numbers, and an interpretative, naturalistic approach. Qualitative
research embraces the concept of intersubjectivity usually understood to refer to how people
may agree or construct meaning: perhaps to a shared understanding, emotion, feeling, or
perception of a situation, in order to interpret the social world they inhabit (Nerlich, 2004,
pp. 18). Norman Denzin and Yvonna Lincoln define qualitative researchers as people who
usually work in the ‘real’ world of lived experience, often in a natural setting, rather than a
laboratory based experimental approach. The qualitative researcher tries to make sense of
social phenomena and the meanings people bring to them (Denzin & Lincoln, 2000).
In qualitative research, it is acknowledged that the researcher is an integral part of the
process and may reflect on her/his own influence and experience in the research
process. The qualitative researcher accepts that s/he is not ‘neutral’. Instead s/he puts
herself in the position of the participant or 'subject' and attempts to understand how the
world is from that person's perspective. As this process is re-iterated, hypotheses begin to
emerge, which are 'tested' against the data of further experiences e.g. people's narratives.
One of the key differences between quantitative and qualitative approaches is apparent
here: the quantitative approach states the hypothesis from the outset, (i.e. a ‘top down’
approach), whereas in qualitative research the hypothesis or research question, is refined
and developed during the process. This may be thought of as a ‘bottom-up’ or emergent
approach. King and Horrocks compare these two approaches in terms of their assumptions about the world, the knowledge
produced, and the role of the researcher (King & Horrocks, 2010).
9) Advantages and Disadvantages: Qualitative

Qualitative Advantages:
Appreciates research participant’s individuality
Provides insider view of research question
Less structured than quantitative approach
Qualitative Disadvantages:
Not always appropriate to generalize results to larger population
Time consuming
Difficult to test a hypothesis
Proponents of qualitative research argue that such a methodology sees people as individuals, attempting to gather their subjective experience of an event. This can provide a unique insider view of the research question. Through the qualitative approach, which is less structured than a quantitative approach, unexpected results and insights can occur.

When we consider the disadvantages of qualitative research, we are forced to note that due to the individual, subjective nature of qualitative data, it is often inappropriate or not even possible to make predictions for the wider population. It can be lengthy to analyze, and due to the open-ended approach used in qualitative research it can be difficult to test hypotheses.
10) Qualitative Research in Psychology

Today, a growing number of psychologists are re-examining and re-exploring qualitative methods for psychological research, challenging the more traditional ‘scientific’ experimental approach.
Today, a growing number of psychologists are re-examining and re-exploring qualitative
methods for psychological research, challenging the more traditional ‘scientific’
experimental approach (see, for example, Gergen, 1991; 1985; Smith et al., 1995a, 1995b).
There is a move towards a consideration of what these other methods can offer to
psychology ( Bruner, 1986; Smith et al.,1995a). What we are now seeing is a renewed interest
in qualitative methods which has led to many researchers becoming interested in how
qualitative methods in psychology can stand alongside, and complement, quantitative
methods. This is important, since both qualitative and quantitative methods have value to
the researcher and each can complement the other, albeit with a different focus (Crossley,
2000; Dixon-Wood & Fitzpatrick, 2001; Elwyn, 1997; Gantley et al., 1999; Rapport et al.,
2005).
11) When to use Qualitative Research

Content and Thematic Analysis
Grounded Theory (Generating Theory from Data)
Discourse and Narrative Analysis
Individual Case Studies
Rigorous research methodologies form a necessary foundation in evidence-based research.
Until recently such a statement has been read as referring solely to quantitative
methodologies such as in the double blind randomized controlled trial (RCT) encountered in
healthcare research. Quantitative methods were designed for specific purposes and were
never intended to take researchers to the heart of patients’ lived experiences. The
experimental, quantitative research methods, such as the RCT, focus on matters involved in
the development of clinical drug trials and assessing treatment outcomes, survival rates,
improvements in healthcare and clinical governance and audit.
Qualitative paradigms, on the other hand, offer the researcher an opportunity to develop an
idiographic understanding of participants’ experiences and what it means to them, within
their social reality, to be in a particular situation (Bryman, 1992). Qualitative research has a
role in facilitating our understanding of some of the complexity of bio-psycho-social
phenomena and thus offers exciting possibilities for psychology in the future. Qualitative
research is therefore developing new ways of thinking, and revisions to the more established
methods are constantly being introduced and debated by researchers across the world.
These methods include: Content / thematic analysis (CA/ TA); Grounded Theory in
psychology (GT); Discursive psychology / Discourse analysis (DA) and Narrative psychology
(NA); and the detailed and in-depth case study of an individual.
12) When to use Qualitative Research

Content and Thematic Analysis - Content Analysis, or Thematic Analysis (the terms are frequently used interchangeably and generally mean much the same), is particularly useful for conceptual, or thematic, analysis or relational analysis. It can quantify the occurrences of concepts selected for examination (Wilkinson & Birmingham, 2003).
Content Analysis, or Thematic Analysis (the terms are frequently used interchangeably and
generally mean much the same), is particularly useful for conceptual, or thematic, analysis
or relational analysis. It can quantify the occurrences of concepts selected for examination
(Wilkinson & Birmingham, 2003). CA, or TA, has become rather a ‘catch-all term’ (Boyle,
1994), but this approach is useful when the researcher wishes to summarize and categorize
themes encountered in data collection. These can include: summaries of people’s comments
from questionnaires, documents such as diaries, historical journals, video and film footage,
or other material: the list is not exhaustive. The approach is also useful in guiding the
development of an interview schedule. However, this method provides only summaries of the
frequency of the content. The method may therefore be considered too limited where an in-depth
approach is required.
Interview data need methods of analysis capable of providing the researcher with greater
insight into participants’ views, the psychological and phenomenological background to
participants’ stories and their narrated experiences and feelings. Other qualitative methods
are explored for utility of purpose here. One such method, originally developed from
sociological research is Grounded Theory (GT).
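As a hypothetical sketch of the quantifying side of content/thematic analysis described above (counting occurrences of concepts selected for examination), the example below tallies how many open-ended comments mention each pre-selected concept; the comments and keyword lists are invented.

```python
# Hypothetical content-analysis sketch: count how many comments mention each concept.
comments = [
    "The waiting time was long but the staff were supportive",
    "Staff explained everything; I felt supported",
    "Long waiting time, poor communication",
]

concepts = {
    "waiting": ["waiting", "wait"],
    "support": ["support", "supportive", "supported"],
    "communication": ["communication", "explained"],
}

counts = {name: 0 for name in concepts}
for comment in comments:
    words = comment.lower().replace(";", " ").replace(",", " ").split()
    for name, keywords in concepts.items():
        if any(word in keywords for word in words):
            counts[name] += 1   # count each comment at most once per concept

print(counts)   # {'waiting': 2, 'support': 2, 'communication': 2}
```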

13) Grounded Theory
Grounded Theory - is frequently considered to offer researchers a suitable qualitative method for in-depth exploratory investigations. It is a rigorous approach which provides the researcher with a set of systematic strategies and assumes that when investigating social processes qualitatively from the bottom up there will be an emergence of a working theory about the population from the qualitative data (Willig, 2008, pp. 44).
Grounded Theory (GT) is frequently considered to offer researchers a suitable qualitative
method for in-depth exploratory investigations (Charmaz, 1995; Strauss & Corbin, 1990;
Willig, 2008). It is a rigorous approach which provides the researcher with a set of
systematic strategies (Charmaz, 1995). While this method shares some features with
phenomenology, (see below), GT assumes that the analysis will generate one over-arching
and encompassing theory. GT was, in its original version, designed to investigate social
processes from the bottom up, or the “emergence of theory from data” (Willig, 2008, pp. 44).
GT methods developed from the collaboration of sociologists Glaser and Strauss during the
1960s and 1970s (e.g. Glaser & Strauss, 1967). It is a set of strategies that has been of
immense use in sociological research as an aid to developing wider social theory (hence its
name). As Willig observes, GT can be an attractive method for psychologists who have
trained in quantitative methods since the building blocks, identified using the GT approach,
aim to generate categories from the data collected, thus moving from data to theory (Willig,
2008, pp. 34 onward). Its originators, Glaser and Strauss (1967), considered the separation of
theory from research as being a rather arbitrary division. They set about devising an
approach whereby the data collection stage may be blurred or merged with the
development of theory in an attempt to break down the more rigid boundaries between the
usual data collection and data analysis stages. GT approaches data by blurring these
different stages and levels of abstraction. A GT analysis may proceed by checking and
refining the data analysis by collecting more data until ‘data saturation’ can be achieved
(Charmaz, 1996). However, for many psychological investigations, it may be obvious at an
early stage that, due to the complexity of people’s lived experiences, participants’ narratives
about their lives, feelings and/ or emotions, may not always be best served by adopting GT
as a method (i.e. generation of one main theory).
GT was originally developed for researching from a sociological perspective and, while there is
some commonality between sociology and social psychology, the use of GT to analyse data
might not always provide a sufficiently robust and flexible way of capturing psychological
nuances and complexities contained in participants’ narratives about lived experiences. GT,
as a methodology, was therefore adopted and adapted by some qualitative psychologists
(Pidgeon & Henwood, 1997). Willig concludes that GT can be “reserved for the study of
social psychological processes” as a descriptive method (Willig, 2008, pp. 47). A further
challenge, when considering using GT, is the challenge provided by the different types of
GT that have developed within the field such as the debate on the two main ‘schools’ of GT:
Straussian and Glaserian.

14) Discursive psychology and Discourse Analysis

Discourse Analysis: The discursive approach looks to verbal behavior as a more direct means of uncovering underlying cognitions (Harré,1995) rather than assigning a numerical value ‘score’ or scale to a behavior. This approach takes the view that interpretation and empathy are involved in attempting to understand human behavior. Self-report, from people being studied, can then become a valuable resource in its own right.
As its name suggests, Discourse Analysis (DA) is primarily concerned with the nuances of
conversation (Potter, 1996). The term ‘discourse’ can cover anything related to our use of
language whether a single utterance or moment of speech (speech fragment) through to a
conversation between two people, or the delivery of a political speech. It may refer to how
language may be systematically ordered as in language ‘rules’ or different conventions such
as medical jargon or legal terminology (Tonkiss, 2012, pp. 406). The ‘turn to language’ in
researching society and in the discursive psychology field has been inspired by theories
emerging from other disciplines and consideration of speech use as both communication
and performance (Seale, 2012). As Willig observes (2008, pp. 95) DA is more than a
methodology, since social scientists have become interested both in how we use language in
communication and also how we ‘socially construct’ our environment and lived experience
by the use of language (see, for example, Bruner, 1986, 1991; Gergen, 2001). It has become
more of a critique of how we describe the world and the nuances of the discourse and
language we use. Discursive psychology highlights how ‘knowledge’ is socially constructed
and reported for example in “existing institutional practices that may be considered unjust.”
(Holt, 2011, pp. 66). Where some psychologists may wish to explore conversation by
exploring the finer nuances of conversation such as the length of a pause, the terms of
speech people use, or other variations of discourse, then DA can be a very useful method
(Potter & Wetherell, 1987; Willig, 2008, pp. 96-106).
The discursive approach looks to verbal behavior as a more direct means of uncovering
underlying cognitions (Harré,1995) rather than assigning a numerical value ‘score’ or scale
to a behavior. This approach takes the view that interpretation and empathy are involved
in attempting to understand human behavior. Self-report, from people being studied, can
then become a valuable resource in its own right.
15) What do we do if we want the best of both worlds?
Mixed-Methods Designs - in which we use both quantitative (for example, a questionnaire) and qualitative (for example, a number of case studies) methods. Mixed-methods research is a flexible approach, where the research design is determined by what we want to find out rather than by any predetermined epistemological position. In mixed-methods research, qualitative or quantitative components can predominate, or both can have equal status.
What, then, do we do if we want to look at both breadth and depth, or at
both causality and meaning? In those cases, it is best to use a so-called
mixed-methods design, in which we use both quantitative (for example, a
questionnaire) and qualitative (for example, a number of case studies)
methods. Mixed-methods research is a flexible approach, where the
research design is determined by what we want to find out rather than by
any predetermined epistemological position. In mixed-methods research,
qualitative or quantitative components can predominate, or both can have
equal status.





Midterm  Study Guide
This document will be your study guide for the midterm exam for this course. Remember, the midterm exam will be online starting next week and will be open-book, open-note, and multiple choice. This does not mean the exam will be easier; in fact, many students find such tests more difficult. DO NOT rely on looking up your answers alone; this will take excessive time, and the exam is limited to 1 hour and 30 minutes. This document should direct your focus on what to study. If you have any questions, please direct them to the exam-prep Q & A forum.
Terms to Know
Types of Unscientific thinking
Experience-Based errors in thinking
 Scientific Method
Assumptions about behavior in research
 Theory
 Hypothesis
Relationship between theory and data
 Method
Methodology
Insight
Comparative Historical Analysis (Know different types)
Epistemology
 Positivism
Ethnography (Ethnographic Methods)
 Case Study (definition, when it is used, different types)
Meta analysis
 Ideographic Explanation
Basic vs. Applied Research
Field vs. Lab research (know different types as well)
Validity and reliability in Field vs. Lab research
Quantitative vs. Qualitative research
Statistical Methods (for the midterm, just know basic definition from book and slides)
Operationalize
Variable
Independent Variable
 Dependent Variable
 Control Variable
Cross-sectional vs. longitudinal research
Term Paper Assignment

Basic Requirements:
 • 7-9 Pages in length (not including cover page and reference list)
• APA format
• Times New Roman, 12 point font with 1 inch margins
• Scholarly Articles Only
• Due by Sunday, March 15th @ 11:59pm via Turn-it-in online submission portal on WebAccess

Overview and instructions:


You may choose this as one of your two term-paper options. You will be given a series of scholarly articles to choose from within the field of developmental psychology. This project will be an exercise not only in identifying real-world applications of social science research, but in following a social science research methods writing format, namely APA style. Please select one of the articles posted to the “Scholarly Articles in the Social Sciences” section of WebAccess. Select at minimum one article, read it thoroughly, and make an argument for or against the conclusions made by the author. You must identify what type of study the authors engaged in (i.e., experimental, quasi-experimental, qualitative, quantitative, case study, laboratory research, field research, etc.). Make sure to support your argument using ONLY scholarly sources such as those posted to WebAccess. Please state what (if anything) you agree and/or disagree with, using information from either one of these articles (or outside scholarly sources if you wish) to support your argument. You MUST do more than simply summarize your article of choice. Avoid using “I” statements, as you will be graded on your academic writing style as well as content. If you need help finding additional scholarly articles, please utilize Google Scholar, email me, or speak to the college library staff for assistance.


Second week: 
If we were to argue that there is a “best kind of research,” which one would it be? Is there a “best kind of research” at all?
I learned that, in the social sciences, there is no best kind of research. I think researchers probably use several methods in order to conduct research. Empirical: all information is based on observation. Objective: observations are verified by others. Systematic: observations are made in a step-by-step fashion. Controlled: potentially confounding factors are eliminated. Public: built on previous research, open to critique and replication, building toward theories.
It depends on what kind of research you are doing and for what purpose. For example, social science is the science of people or collections of people, such as groups, firms, societies, or economies, and their individual or collective behaviors. It can be classified into disciplines such as psychology, sociology, and economics. Society is very much about the “collective.” I think using the scientific method is imperative for any kind of research.

Discussion Question Week 3


When we consider the advantages and disadvantages of laboratory vs. field research, are there any others that come to mind that were not outlined in lecture? Are there some things we can do in the field that we just cannot do in the lab and vise-versa? What are your ideas as researchers-in-training for accounting for the disadvantages of each and what problems might you foresee arising with your idea?

Discussion Question Week 3
1.      When we consider the advantages and disadvantages of laboratory vs. field research, are there any others that come to mind that were not outlined in lecture?
A)     Field Research/Ethnography: Participant observation is based on living among the people under study for a period of time, which could be months or even years, and gathering data through continuous involvement in their lives and activities. The ethnographer begins systematic observation and keeps notes, in which the significant events of each day are recorded along with informants’ interpretations. These demands are met through two major research techniques: participant observation and key informant interviewing. An example would be the one in the video, where Maria has been spending several months with Steve, a drug user, and the ethical problem arises because the participant does not realize that his behavior is being observed. Obviously, he cannot give voluntary informed consent to be involved in the study. Steve confesses that he is HIV positive and his partner does not know, so there is also a confidentiality issue.
2.      Are there some things we can do in the field that we just cannot do in the lab and vise-versa?
A)        I learned that a clear advantage of laboratory experiments over field experiments is that it is much easier to obtain large amounts of very detailed information from participants in the laboratory. An important reason why laboratory experiments are more artificial than field experiments is that the participants in laboratory experiments are aware that their behavior is being studied. One of the advantages of field experiments over laboratory experiments is that the behavior of the participants is often closer to their normal behavior. The greatest advantage of field experiments over laboratory experiments is that they are less artificial.
3.     What are your ideas as researchers-in-training for accounting for the disadvantages of each and what problems might you foresee arising with your idea?

A)     I think that the method of investigation used most often by psychologists is the experimental method. Some of the advantages of the experimental method are common to both laboratory and field experiments. I would have to consider reliability and validity and field vs. laboratory research, and avoid any confounding variables. These are variables that are manipulated or allowed to vary systematically along with the independent variable. The presence of any confounding variables has grave consequences, because it prevents us from being able to interpret our findings.

I am with you on the fact that, for any research, it is important to consider the source’s own bias, its own influences that can alter the results. Historical research is a tool. A reader or listener must take the bias of the researcher into consideration when reading or hearing their research. For example, when Republicans or Democrats are speaking, most of the time they are speaking to their political party’s advantage, not for or to the people. It makes me sad when John Boehner says, “We the people of the United States of America.” I want to tell him: you are NOT the only person here; you do not represent my views, or most of my family’s or friends’ views. I know this for a fact because we have discussed these issues; however, he makes people believe that everybody is thinking his way (at least that is what I think he is trying to do).

The Case Study Method: A case study is a “strategy for doing research which involves an empirical investigation of a particular contemporary phenomenon within its real life context using multiple sources of evidence” (Robson, 1993, p. 146). Case studies are in-depth investigations of a single person, group, event or community. Depending on the case study, the research may be field or laboratory research; either way, we always lose valuable information about individual variation when we try to collapse the information about experiences, emotions, and behaviors into common experiences that we can measure numerically and generalize across a population. If I understand the lecture correctly, we cannot generalize from a case study. A case study must serve one of three purposes: descriptive, exploratory, or explanatory.

FIFTH WEEK > what are some of the benefits and negatives of the Case Study method
when compared to other types of research reviewed in the course thus far?
Can you think of some specific examples where the case study method might be preferable?
What is a case study?
A “strategy for doing research which involves an empirical investigation of a particular contemporary phenomenon within its real life context using multiple sources of evidence” (Robson, 1993, p. 146)
In a case study you can go in knowing almost nothing and explore why people do things. A case study can serve one of these three purposes:
Descriptive,
Exploratory
Explanatory
(Explanatory studies are focused on process: how was it done? who is doing what to whom? evaluating how things work.)
Outcome: does it work?
Documentary and survey methods collect information (e.g., who won); a case study examines the process.
Types of case study:
Individual Case study.
Stanley, “The Jack-Roller” (Shaw).
One single case study.
Set of individual case studies:
Three general practice surgeries compared.
In the health service, for example, three medical practices.
Community Studies:
Family and Kinship in East London; The Azande in the Sudan.
Live with a group of people for several years, using similar techniques for both groups. A particular organization.
Social Group Studies:
Outsiders – Becker on marijuana smokers and musicians.
1950s and 1960s period: marijuana smokers and musicians.
Studies of organizations and institutions:
Working for Ford - Benyon;  National Front – Fielding
Studies of events, roles and relationships:
Housewife – Oakley; Cuban Missile Crisis.
How to plan a case study
Conceptual Framework:
Displays the important features of a case study.
Shows relationships between features.
Make assumptions explicit.
Selective.
Iterative.
Based on theory.
Takes account of previous research.
Includes personal orientation.
Includes overlaps and inconsistencies.


Research Questions:
Consistent with conceptual framework
Covers conceptual framework thoroughly
Structured and focused.
Answerable
Forms basis for data collection.

Research Design
Case Spatial Variation (case one) 
Sampling/replications Strategy
Methods and Instruments
Analysis of Data.
Replication Strategy
(Sometimes called sampling strategy.) Literal vs. theoretical replications
Literal = more of the same
Theoretical = different, identified according to a theoretical standpoint.
Must be linked to research questions
Determines the extent to which generalization is possible
(N.B. Theoretical not statistical generalization.)
Theoretical Replications
Choose:
Actors: e.g., men and women; MEPs from different countries; members of different pressure groups.
Settings: e.g., different companies, different branches of a political party, a range of local authorities.
Events: e.g., elections, selection meetings, budget group meetings, demonstrations.
Processes: e.g., negotiating new laws, developing media strategies.

Why select a single case?
Critical case (test case)
Theory well developed; the case will confirm or refute the theory. Example: Festinger et al., When Prophecy Fails: A Social and Psychological Study of a Modern Group that Predicted the Destruction of the World. New York: Harper and Row.
Extreme or unique case
Common in clinical cases; example: Fielding – National Front.
Representative or typical case
Captures the circumstances of the everyday; example: Lynd and Lynd, Middletown: A Study in Modern American Culture. Florida: Harcourt Brace and Company.

Methods and instruments
Observation
Participant observation
Ethnography
Systematic observation
Interview
Open-ended
Focused / semi-structured
Structured
Documents/Records:
e.g., minutes of meetings, patient records, diaries, etc.

Analysis of Data
Prepare (lots of data)
May start during data collection
How will the data be organized?
What analysis strategy will you use?
Follow theoretical proposition
Develop descriptive framework
Problems of Validity
Unreliable self-report data
Unsubstantiated observations
Post-hoc, unsystematic summaries
Speculation and over generalization
Common pitfalls: token literature review. Premature theorizing. Phase slippage.

A lecture on case studies as a research strategy, taken from a series on research methods and research design given to masters (graduate) students by Graham R Gibbs at the University of Huddersfield. This is part 3 of three, and examines the replication strategy in case studies and whether it should be based on literal or theoretical replication, or whether the case study should use only a single case as a test case, extreme case or typical case.



Consider generalizing the results of research done on a small sample to the general population.
I think I need to consider the type of design chosen: it questions the conditions under which the findings can be generalized and deals with the ability to generalize the findings outside the study to other populations and environments. Purpose of research design: provides the plan or blueprint for testing research questions and hypotheses; involves structure and strategy to maintain control and intervention fidelity. Accuracy: accomplished through the theoretical framework and literature review; all aspects of the study systematically and logically follow from the research questions. Time: is there enough time for completion of the study? Control: achieved with steps taken by the researcher to hold the conditions of the study uniform and avoid or decrease the effect of intervening, extraneous, or mediating variables on the dependent variable or outcome. Intervention fidelity: ensures that every subject receiving the intervention or treatment receives the identical intervention or treatment.
what are some of the benefits and negatives of qualitative and quantitative research?
Variables that occur during the study that interfere with or influence the relationship between the independent and dependent variables.
Intervening and mediating variables are processes that occur during the study.
Objectivity can be achieved form a thorough review of the literature and the development of a theoretical framework.
Instrumentation: changes in equipment used to make measurements or changes in observational techniques may cause measurements to vary between participants related to  treatment fidelity.
Controlling Extraneous Variables Using a homogeneous sample
Using consistent data-collection procedures- constancy.  A homogeneous sample is one in which the researcher chooses participants who are alike – for example, participants who belong to the same subculture or have similar characteristics.  Homogeneous sampling can be of particular use for conducting focus groups because individuals are generally more comfortable sharing their thoughts and ideas with other individuals who they perceive to be similar to them. Patton, M. (2001). Qualitative Research & Evaluation Methods. 
when thinking of this,  Could one said to be superior to the other, or are they context specific?
 Consider generalizing the results, according to the lecture, the independent variable is: the variable that the researcher hypothesizes will have an effect on the dependent variable Usually manipulated (experimental study) The independent variable is: manipulated by means of a program, treatment, or intervention done to only one group in the study (experimental group ) The control group gets the standard treatment or no treatment.
The dependent variable is a factor, trait, or condition that can exist in differing amounts or types. Not manipulated and pressured to vary with changes in the independent variable The variable the researcher is interested in explaining. Randomization Each subject in the study has an equal chance of being assigned to the control group or the experimental group.
Assumes that any important intervening, extraneous, or mediating variable will be equally distributed between the groups, minimizing variance and decreasing selection bias.
Testing:  Taking the same test more than once can influence the participant’s responses the next time test is taken.  
(week sixth)
Consider generalizing the results of research done on a small sample to the general population. When thinking of this, what are some of the benefits and negatives of qualitative and quantitative research? Could one be said to be superior to the other, or are they context specific?
I think I need to consider the Type of design chosen:
Questions the conditions under which the findings can be generalized.
Deals with the ability to generalize the findings outside the study to other populations and environments.
Purpose of Research Design:
Provides the plan or blueprint for testing research questions and hypotheses.
Involves structure and strategy to maintain control and intervention fidelity.
Accuracy: Accomplished through the theoretical framework and literature review. All aspects of the study systematically and logically follow from the research questions.
Feasibility
Time: Is there enough time for completion of the study?
Control: Achieved with steps taken by the researcher to hold the conditions of the study uniform and avoid or decrease the effect of intervening, extraneous, or mediating variables on the dependent variable or outcome.
Intervention fidelity: Ensures that every subject who receives the intervention or treatment receives the identical intervention or treatment.
Intervening, Extraneous, Or Mediating Variables.
Variables that occur during the study that interfere with or influence the relationship between the independent and dependent variables.
Intervening and mediating variables are processes that occur during the study.
Objectivity can be achieved from a thorough review of the literature and the development of a theoretical framework.
Instrumentation: changes in equipment used to make measurements, or changes in observational techniques, may cause measurements to vary between participants, which is related to treatment fidelity.
Controlling Extraneous Variables
Using a homogeneous sample
Using consistent data-collection procedures- constancy.
Patton, M. (2001). Qualitative Research & Evaluation Methods. 

Homogeneity
Participants in the study are homogeneous, or have similar extraneous variables that might affect the dependent variable. A homogeneous sample is one in which the researcher chooses participants who are alike – for example, participants who belong to the same subculture or have similar characteristics. Homogeneous sampling can be of particular use for conducting focus groups, because individuals are generally more comfortable sharing their thoughts and ideas with other individuals whom they perceive to be similar to them (Patton, 2001).
Homogeneity of the sample limits generalizability, or the potential to apply the results of the study to other populations.
Independent Variable (X)
The independent variable is the variable that the researcher hypothesizes will have an effect on the dependent variable.
Usually manipulated (experimental study).
The independent variable is manipulated by means of a program, treatment, or intervention given to only one group in the study (the experimental group).
The control group gets the standard treatment or no treatment.
Dependent Variable (Y)
The dependent variable is a factor, trait, or condition that can exist in differing amounts or types.
Not manipulated; presumed to vary with changes in the independent variable.
The variable the researcher is interested in explaining.
Randomization
Each subject in the study has an equal chance of being assigned to the control group or the experimental group.
Assumes that any important intervening, extraneous, or mediating variable will be equally distributed between the groups, minimizing variance and decreasing selection bias.
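To make the randomization idea concrete, here is a minimal Python sketch (my own illustration, not taken from the course materials): each hypothetical participant has an equal chance of being assigned to the control or the experimental group, so an extraneous characteristic such as age should come out roughly balanced between the groups on average.

import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical participant pool with one extraneous variable (age).
participants = [{"id": i, "age": random.randint(18, 65)} for i in range(200)]

# Randomization: shuffle the pool, then split it in half.
random.shuffle(participants)
half = len(participants) // 2
experimental, control = participants[:half], participants[half:]

# Check that the extraneous variable is approximately equally distributed.
print("Mean age, experimental:", statistics.mean(p["age"] for p in experimental))
print("Mean age, control:", statistics.mean(p["age"] for p in control))

The two means will not be identical, but with a reasonably large pool they should be close, which is the sense in which randomization "equally distributes" intervening, extraneous, or mediating variables.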
Internal Validity
Asks whether the independent variable really made the difference or the change in the dependent variable.
Established by ruling out other factors or threats as rival explanations.
Threats to internal validity
History: an event, other than the intervention, that might have an effect on the dependent variable; the event could be either inside or outside the experimental setting.
Testing: Taking the same test more than once can influence the participant’s responses the next time the test is taken.
Threats to Internal Validity:
Mortality: the loss of study subjects
Selection bias: a partiality in choosing the participants in a study.
Objectivity in Conceptualization of the Research Questions (week six)
Type of design chosen
Accuracy
Feasibility
Control and intervention  fidelity
Validity: internal and external
Objectivity can be achieved from a thorough review of the literature and the development of a theoretical framework.
The literature review should be presented so that the reader can judge the objectivity of the research questions.
Purpose of Research Design
Provides the plan or blueprint for testing research questions and hypotheses.
Involves structure and strategy to maintain control and intervention fidelity.
Accuracy
Accomplished through the theoretical framework and literature review
All aspects of the study systematically and logically follow from the research questions.
Feasibility:
Time: Is there enough time for completion of the study?
Subject availability: Are a sufficient number of eligible subjects available?
Facility and equipment availability: Are the necessary equipment and facilities available?
Expense: Is money available for the projected cost?
Experience: Is the study based on the researcher’s experience and interest?
Ethics: Could subjects be harmed?
CONTROL
Control: Achieved with steps taken by the researcher to hold the conditions of the study uniform and avoid or decrease the effect of intervening, extraneous, or mediating variables on the dependent variable or outcome.
Intervention Fidelity
Intervention fidelity: Ensures that every subject who receives the intervention or treatment receives the identical intervention or treatment.
Intervening, Extraneous, Or Mediating Variables.
Variables that occur during the study that interfere with or influence the relationship between the independent and dependent variables.
Intervening and mediating variables are processes that occur during the study.
Extraneous variables are subject, researcher, or environmental characteristics that might influence the dependent variable. “muck up the study!!!!”
Controlling Extraneous Variables
Using a homogeneous sample
Using consistent data-collection procedures- constancy.
Homogeneity
Participants in the study are homogeneous or have similar extraneous variables that might affect the dependent variable
Homogeneity of the sample limits generalizability or the potential to apply the results of the study to other populations.
Independent Variable (X)
The independent variable is the variable that the researcher hypothesizes will have an effect on the dependent variable.
Usually manipulated (experimental study).
The independent variable is manipulated by means of a program, treatment, or intervention given to only one group in the study (the experimental group).
The control group gets the standard treatment or no treatment.
Dependent Variable (Y)
The dependent variable is a factor, trait, or condition that can exist in differing amounts or types
Not manipulated; presumed to vary with changes in the independent variable.
The variable the researcher is interested in explaining.
Randomization
Each subject in the study has an equal chance of being assigned to the control group or the experimental group.
Assumes that any important intervening, extraneous, or mediating variable will be equally distributed between the groups, minimizing variance and decreasing selection bias.
Internal Validity
Asks whether the independent variable really made the difference or the change in the dependent variable.
Established by ruling out other factors or threats as rival explanations.
Threats to internal validity
History: an event, other than the intervention, that might have an effect on the dependent variable; the event could be either inside or outside the experimental setting.
Threats to internal validity
Maturation: Developmental, biological, or psychological processes that operate within an individual over time. These processes are outside the experimental setting.
Testing: Taking the same test more than once can influence the participant’s responses the next time the test is taken.
Instrumentation: changes in equipment used to make measurements, or changes in observational techniques, may cause measurements to vary between participants, which is related to treatment fidelity.
Threats to Internal Validity:
Mortality: the loss of study subjects
Selection bias: a partiality in choosing the participants in a study.
External validity
Questions the conditions under which the findings can be generalized.
Deals with the ability to generalize the findings outside the study to other populations and environments.


CRITERIA FOR SCIENTIFIC METHOD:
Empirical: All information is based on observation.
Objectivity: Observations are verified by others.
Systematic: Observations are made in a step-by-step fashion.
Controlled: Potentially confusing factors are eliminated.

Public: Built on previous research, open to critique and replication, building towards theories



(UN) SCIENTIFIC THINKING:  
1. Authority: Because someone told us that something is true.
2. Reasoning
A priori method (proposed by Charles Peirce): a person develops a belief by reasoning, listening to others’ reasoning, and drawing on previous intellectual knowledge – not based on experience or direct observation.
So now that we have a firm grounding in some of the basic theories and theorists within psychology, and a basic understanding of the multiple conceptions of personality and how it develops, the logical next step is to explore how we come to these conclusions with regard to models of personality and other psychological concepts. In other words, how do we scientifically ascertain whether these theories hold water, and how can we most accurately quantify and categorize human behavior, while still attempting to allow for the grey area of individual differences?

Well, Psychological Research is defined as the scientific exploration, designed to describe, predict, and control human behavior, and to understand the hard facts of the human mind across diverse psychological and cross-cultural populations.

So the first logical question before engaging in any kind of research is, how do we know something to be true in day to day life?

Well the first way is Authority, because someone told us that something is true. This could be any authority figure, like a professor, or someone in the media. Because someone told us it’s the truth, we often believe it to be so. Obviously, this has issues from a scientific perspective. When engaging in research we can’t rely on what others tell us alone, we need to have hard, replicable data to support it.

The second method is Reasoning. The main method of reasoning is the A priori method (proposed by Charles Peirce), where a person develops a belief by reasoning, listening to others’ reasoning, and drawing on previous intellectual knowledge – not based on experience or direct observation. While this might work when developing opinions in day-to-day life, we can’t say that this gives us hard facts that are generalizable to any population. Opinions derived from this method, however, can be said to create a good starting point for research.
(Somewhat) scientific thinking: Experience
Empiricism:

knowing by direct observation or experience.

Subject to errors in thinking! So as we start to move towards scientific thinking we get to our third way of knowing, Experience, or knowing by direct observation. While this is all well and good, it is subject to errors in thinking. Indeed, our perceptions may differ between individuals when encountering the same experience, so we can’t say that what we come up with will always be accurate. And, in fact, there are several well-known types of errors in experience-based conclusions and in psychological research in general.

Experience-based errors in thinking: Illusory Correlation
Definition: thinking that one has observed an association between events that (a) doesn’t exist, (b) exists but is not as strong as is believed, or (c) is in the opposite direction from what is believed.
The first one is an Illusory Correlation, or thinking that one has observed an association between events that either doesn’t exist, is weaker than believed, or runs in the opposite direction from what is believed.

Experience-based errors in thinking: Confirmation Bias
In psychology and cognitive science, confirmation bias (or confirmatory bias) is a tendency to search for or interpret information in a way that confirms one's preconceptions, leading to statistical errors.
The second error in thinking is Confirmation Bias

Experience-based errors in thinking: Availability Heuristic. An availability heuristic is a mental shortcut that relies on immediate examples that come to mind.
As a result, you might judge that those events are more frequent and possible than others and tend to overestimate the probability and likelihood of similar things happening in the future.
Finally, the last common error in psychological research is the Availability Heuristic, a mental shortcut that relies on immediate examples that come to mind.
 When you are trying to make a decision, a number of related events or situations might immediately spring to the forefront of your thoughts. As a result, you might judge that those events are more frequent and possible than others. You give greater credence to this information and tend to overestimate the probability and likelihood of similar things happening in the future.
The term was first coined in 1973 by psychologists Amos Tversky and Daniel Kahneman. They suggested that the availability heuristic occurs unconsciously and operates under the principle that "if you can think of it, it must be important." Things that come to mind more easily are believed to be far more common and more accurate reflections of the real world.
For example, after seeing several news reports about car thefts, you might make a judgment that vehicle theft is much more common than it really is in your area. This type of availability heuristic can be helpful and important in decision-making. When faced with a choice, we often lack the time or resources to investigate in greater depth. Faced with the need to make an immediate decision, the availability heuristic allows people to quickly arrive at a conclusion.
While it can be useful at times, the availability heuristic can lead to problems and errors. Reports of child abductions, airplane accidents, and train derailments often lead people to believe that such events are much more typical than they truly are.

So given all of these errors in thinking and understanding, how could we ever hope to accurately observe?


Comparative-historical analysis: A prominent research tradition in the social sciences, especially in political science and sociology. Works within this research tradition use comparative-historical methods, pursue causal explanations, and analyze units of analysis at the meso- or macro-level.

Comparative methods: Diverse methods used in the social sciences that offer insight through cross-case comparison. For this, they compare the characteristics of different cases and highlight similarities and differences between them. Comparative methods are usually used to explore causes that are common among a set of cases. They are commonly used in all social scientific disciplines.

Epistemology: A branch of philosophy that considers the possibility of knowledge and understanding. Within the social sciences, epistemological debates commonly focus on the possibility of gaining insight into the causes of social phenomena.

Experimental methods: The most powerful method used in the social sciences, albeit the most difficult to use. It manipulates individuals in a particular way (the treatment) and explores the impact of this treatment. It offers powerful insight by controlling the environment, thereby allowing researchers to isolate the impact of the treatment.

Ethnographic methods: A type of social scientific method that gains insight into social relations through participant observation, interviews, and the analysis of art, texts, and oral histories. It is commonly used to analyze culture and is the most common method of anthropology.

Case Study (Within-case methods): A category of methods used in the social sciences that offer insight into the determinants of a particular phenomenon for a particular case. For this, they analyze the processes and characteristics of the case.

Ideographic explanation: Causal explanations that explore the causes of a particular case. Such explanations are not meant to apply to a larger set of cases and commonly focus on the particularities of the case under analysis.

Method: A technique used to analyze data. Commonly, a method is aligned with a particular strategy for gathering data, as particular methods commonly require particular types of data. “Method” is therefore commonly used to refer to strategies for both analyzing and gathering data.
Methodology: A body of practices, procedures, and rules used by researchers to offer insight into the workings of the world.

Insight: Evidence contributing to an understanding of a case or set of cases. Comparative-historical researchers are generally most concerned with causal insight, or insight into causal processes.

Positivism: An epistemological approach that was popular among most of the founding figures of the social sciences. It claims that the scientific method is the best way to gain insight into our world. Within the social sciences, positivism suggests that scientific methods can be used to analyze social relations in order to gain knowledge. At its extreme, positivism suggests that the analysis of social relations through scientific methods allows researchers to discover laws that govern all social relations. Positivism is therefore linked to nomothetic explanations. Other positivists believe social complexity prevents the discovery of social laws, but they still believe that the scientific method allows researchers to gain insight into the determinants of social phenomena.

Variable: Something that the researcher/experimenter can measure.
Independent Variable: The variable the experimenter has control over, can change in some way to see if it has an effect on other variables in the study.
Dependent Variable: The variable that is measured to see if a change takes place.
Control Variable: The variable that is not manipulated that serves as a comparison group from the other variables in the study. This third variable is used to ensure that the independent variable, when manipulated is actually having an effect on the dependent variable. For example, if a similar change occurs in the control variable as the dependent variable, this indicates that the change may not be the result of the independent variable manipulation and may be a natural change in the variable.
In an experiment the researcher manipulates the independent variable to see if it has an effect on the dependent variable.

Statistical methods: The most common subtype of comparative methods. It operationalizes variables for several cases, compares the cases to explore relationships between the variables, and uses probability theory to estimate causal effects or risks. Within the social sciences, statistics uses natural variation to approximate experimental methods. There are diverse subtypes of statistical methods.
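As a rough illustration of that logic, here is a minimal Python sketch of my own (the numbers are hypothetical, not from the glossary's source): operationalize two variables for a handful of cases, compare the cases, and ask whether the relationship between the variables is stronger than chance.

from scipy import stats

# Hypothetical operationalized variables for eight cases (e.g., countries).
gdp_per_capita = [5, 12, 18, 25, 31, 40, 48, 60]    # thousands of dollars
life_expectancy = [62, 68, 71, 74, 76, 79, 80, 82]  # years

# Compare the cases: estimate the association and how likely it is by chance.
r, p_value = stats.pearsonr(gdp_per_capita, life_expectancy)
print(f"correlation r = {r:.2f}, p = {p_value:.4f}")

A small p-value suggests the association is unlikely to be chance alone, although with only eight cases any such estimate is very rough, which is why statistical methods typically draw on many more cases.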


Empirical: All information is based on observation.
 Objectivity: Observations are verified by others.
 Systematic: Observations are made in a step-by-step fashion.
 Controlled: Potentially confusing factors are eliminated.
 Public: Built on previous research, open to critique and replication, building towards theories
Relationship between theory and data
Deduction: Reasoning from a set of general statements towards the prediction of some specific event. Based on a theory, one can deduce an event or behavior given particular conditions.
Hypothesis: Prediction about specific events that is derived from the theory.
Induction: Logical process of reasoning from specific events to the theory (either confirming or disproving the theory).
Question: “History is created by the victor.” Given this statement, do we believe that comparative historical research gives us a good grasp of what actually happened in the past?
Answer: I do not think so. According to the lecture, comparative-historical analysis has four main defining elements. Comparative-historical analysis is also defined by epistemology. Specifically, comparative-historical works pursue social scientific insight and therefore accept the possibility of gaining insight through comparative-historical plus other methods. Finally, the unit of analysis is a defining element, with comparative-historical analysis focusing on more aggregate social units.
Four main defining elements:
 Historical Events Research focuses on one short historical period (1 case, 1 time period)
 Historical Process Research –traces a sequence of events over a number of years (1 case, many time periods)
Cross-sectional Comparative Research -- comparing data from one time period between two or more nations (many cases, 1 time period)
 Comparative Historical Research – longitudinal comparative research (many cases) over a prolonged period of time.  Comparative and Historical Research by number of cases and length of time studied. 
4. Scientific Method:
Definition: A way of knowing characterized by the attempt to apply systematic, objective, empirical methods when searching for causes of natural events.
Probabilistic Statistical determinism: Based on what we have observed, is the likelihood of two events occurring together (whether causal, predictive, or simple relational) greater than chance?
Objectivity: without bias of the experimenter or participants.
Data-driven: conclusions are based on the data-- objective information.
Well, we have what is known as the scientific method:
The Scientific Method Is a way of knowing characterized by the attempt to apply systematic, objective, empirical methods when searching for causes of natural events.
It utilizes probabilistic statistical determinism, asking the data given what we have observed, is the likelihood of two events occurring together (whether causal, predictive, or simple relational) greater than chance?
It utilizes, or attempts to utilize Objectivity: producing or participating in research without bias of the experimenter or participants.
And finally, and most importantly, it’s data-driven: conclusions are based on the data – objective information and mathematical facts.
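To show what "greater than chance" means in practice, here is a minimal Python sketch with made-up counts (a hypothetical illustration of my own, not data from the lecture): a chi-square test asks whether two events co-occur more often than chance alone would predict.

from scipy.stats import chi2_contingency

# Hypothetical counts. Rows: loud noise vs. quiet; columns: error vs. no error.
observed = [[30, 20],   # loud noise: 30 errors, 20 no errors
            [15, 35]]   # quiet:      15 errors, 35 no errors

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")
# A small p-value (conventionally < .05) means the two events occur together
# more often than chance would predict; the conclusion is driven by the data.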

Steps of The Scientific Method

Ask a Question
Do Background Research
Construct a Hypothesis
Test Your Hypothesis by Doing an Experiment
Analyze Your Data and Draw a Conclusion
Communicate Your Results

Scientific Thinking in Research
CRITERIA FOR SCIENTIFIC METHOD:
Empirical: All information is based on observation.
Objectivity: Observations are verified by others.
Systematic: Observations are made in a step-by-step fashion.
Controlled: Potentially confusing factors are eliminated.
Public: Built on previous research, open to critique and replication, building towards theories

Lawful: Every event can be understood as a sequence of natural causes and effects.
Determinism: Events and behaviors have causes.
Discoverability: Through systematic observation, we can discover causes – and work towards more certain and comprehensive explanations through repeated discoveries. 

Now all of this couldn’t work if we didn’t make some assumptions about behavior when engaging in scientific research.
We assume, as experimental psychologists that:
Lawful: Every event can be understood as a sequence of natural causes and effects.
As psychological researchers we believe in Determinism: That events and behaviors have causes.  And we also believe in Discoverability: Through systematic observation, we can discover causes – and work towards more certain and comprehensive explanations through repeated discoveries. 

Research Question
Definition of a theory: A set of logically consistent statements about some psychological phenomenon that (a) best summarizes existing empirical knowledge of the phenomenon, (b) organizes this knowledge in the form of precise statements of the relationships among variables, (c) provides a tentative explanation for the phenomenon, and (d) serves as a basis for making predictions about behavior.
So we have our method, we have our assumptions, so how do we actually do research?
Relationship between theory and data
Deduction: Reasoning from a set of general statements towards the prediction of some specific event. Based on a theory, one can deduce an event or behavior given particular conditions.
Hypothesis: Prediction about specific events that is derived from the theory.
Induction:  Logical process of reasoning from specific events to the theory (either confirming or disproving the theory).
From theory to actual research: so what is the relationship between theory and data, you might ask?

Well, the first relationship is Deduction: Reasoning from a set of general statements towards the prediction of some specific event. Based on a theory, one can deduce an event or behavior given particular conditions.

The second is that we can form a Hypothesis: Prediction about specific events that is derived from the theory.

And finally, the last one is Induction:  Logical process of reasoning from specific events to the theory (either confirming or disproving the theory).


Next we turn to the various methodologies of social science research, but before we do so we will review some of the core aspects of the research process.
So, in common with other sciences, psychology and sociology are both concerned with theories and with data.
A theory provides a general explanation or account of certain findings or data. It also generates a number of experimental hypotheses, which are predictions or expectations about behavior based on the theory. For example, someone might propose a theory in which it is
argued that some people are more hostile than others. This theory could be used to produce various hypotheses or predictions, such as the following: hostile people will express anger more often than non-hostile ones; hostile people will react more strongly than non-hostile
ones to frustrating situations; hostile people will be more sarcastic than non-hostile people.
Psychologists spend a lot of their time collecting data in the form of measures of behavior. Data are collected in order to test various hypotheses. Most people assume that this data collection involves proper or true experiments carried out under laboratory conditions, and it is true that literally millions of laboratory experiments have been carried out in psychology. However, psychologists make use of several methods of investigation, each of which has provided useful information about human behavior.
As you read through the various methods of investigation in your textbook and listen to them in our lectures, it is natural to wonder which methods are the best and the worst as was the topic in our last discussion question. In some ways, it may be more useful to compare the methods used by psychologists to the clubs used by the golf professional. The driver is not a better or worse club than the putter, it is simply used for a different purpose.
In similar fashion, each method of investigation used by psychologists is very useful for testing some hypotheses, but is of little or no use for testing other hypotheses.
However, as we will discuss further the experimental method provides the best way of being able to make inferences about cause and effect.

EXPERIMENTAL METHOD
The method of investigation used most often by psychologists is the experimental method. In order to understand what is involved in the experimental method, we will consider a concrete example.
Dependent and independent variables
Suppose that a psychologist wants to test the experimental hypothesis that loud noise will have a disruptive effect on the performance of a task. As with most hypotheses, this one refers to a dependent variable, which is some aspect of behavior that is going to be measured.
In this case, the dependent variable is some measure of task performance.
Most experimental hypotheses state that the dependent variable will be affected systematically by some specified factor, which is known as the independent variable. In the case we are considering, the independent variable is the intensity of noise. More generally, the independent variable is some aspect of the experimental situation that is manipulated by the experimenter.
We come now to the most important principle involved in the use of the experimental method: the independent variable of interest is manipulated, but all other variables are controlled. It is assumed that, with all other variables controlled, the one and only variable that is being manipulated is the cause of any subsequent change in the dependent variable. In terms of our example, we might expose one group of participants to very intense noise, and a second group to mild noise. What would we need to do to ensure that any difference in the performance of the two groups was due to the noise rather than any other factor? We would control all other aspects of the situation by, for example, always using the same room for the experiment, keeping the temperature the same, and having the same lighting.
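As a hedged illustration of this principle, here is a small simulated version of the noise experiment in Python (the scores and the assumed 5-point effect of intense noise are my own, not the textbook's): the independent variable is the only thing that differs between the simulated groups, and the dependent variable is then compared across them.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_group = 40

# Dependent variable: task-performance scores. Only the noise condition differs.
mild_noise_scores = rng.normal(loc=75, scale=10, size=n_per_group)
intense_noise_scores = rng.normal(loc=70, scale=10, size=n_per_group)  # assumed 5-point drop

t_stat, p_value = stats.ttest_ind(intense_noise_scores, mild_noise_scores)
print(f"mean (mild) = {mild_noise_scores.mean():.1f}, mean (intense) = {intense_noise_scores.mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# Because everything except noise intensity is held constant in the simulation,
# a reliable difference in performance can be attributed to the noise.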
Confounding variables
Another way of expressing the essence of the experimental method is that it is of fundamental importance to avoid any confounding variables. These are variables that are manipulated or allowed to vary systematically along with the independent variable. The presence of any confounding variables has grave consequences, because it prevents us from being able to interpret our findings. For example, suppose that the participants exposed to intense noise performed the task in poor lighting conditions so that they could hardly see what they were doing, whereas those exposed to mild noise enjoyed good lighting
conditions. If the former group performed much worse than the latter group, we would not know whether this was due to the intense noise, the poor lighting, or some combination of the two.
You might think that it would be easy to ensure that there were no confounding variables in an experiment. However, there are many well-known published experiments containing confounding variables. Consider, for example, a study by Jenkins and Dallenbach (1924). They gave a learning task to a group of participants in the morning, and then tested their memory for the material later in the day. The same learning task was given to a second group of participants in the evening, and their memory was tested the following morning after a night’s sleep.
What did Jenkins and Dallenbach find? Memory performance was much higher for the second group than for the first. They argued that this was due to there being less interference with memory when people are asleep than when they are awake. Can you see the flaw in this argument? The two groups learned the material at different times of day, and so time of day was a confounding variable. Hockey, Davies, and Gray (1972) discovered many years later that the time of day at which learning occurs is much more important than whether or not the participants sleep between learning and the memory test.
Proper use of the experimental method requires careful consideration of the ways in which the participants are allocated to the various conditions. A detailed account is given in the Research methods: Design of investigations chapter, so we will focus here only on experiments in which there are different participants in each condition.
Suppose that the participants exposed to intense noise were on average much less intelligent than those exposed to mild noise. We would then be unable to tell whether poorer performance by the former participants was due to the intense noise or to their low intelligence. The main way of guarding against this possibility is by means of randomization, in which the participants are allocated at random to the two conditions.
Numerous studies are carried out using students as participants. This raises the issue of whether students are representative of society as a whole. For example, it is possible that students would be less distracted than other people by intense noise because they are used to studying over long periods of time in conditions that can be noisy, such as halls of residence.
The experimental method is used mainly in laboratory experiments. However, it is also used in field experiments, which are experiments carried out in natural settings such as in the street, in a school, or at work. Some of the advantages of the experimental method are common to both laboratory and field experiments, whereas other advantages and limitations are specific to one type of experiment. We will consider the common advantages next, with more specific advantages and limitations being discussed after that.
We can see why findings from studies based on the experimental method do not necessarily establish causality from the following imaginary example. An experiment on malaria is carried out in a hot country. Half of the participants sleep in bedrooms with the windows open, and the other half sleep in bedrooms with the windows closed. Those sleeping in bedrooms with the windows open are found to be more likely to catch malaria.
It would obviously be wrong to argue that having the window open caused malaria.
Having the window open or closed is relevant to catching the disease, but it tells us nothing directly about the major causal factor in malaria (infected mosquitoes).
Replication
The other major advantage of the experimental method concerns what is known as replication. If an experiment has been conducted in a carefully controlled way, it should be possible for other researchers to repeat or replicate the findings obtained from that experiment. There have been numerous failures to replicate using the experimental method, but the essential point is that the chances of replication are greater when the experimental method is used than when it is not.
In the natural sciences, most experiments have been undertaken in carefully controlled laboratory conditions, seeking laws of physics and the like. Inanimate objects are unlikely to react differently in controlled and uncontrolled conditions. The main goal is to avoid contaminating the experiment via the introduction of third variables, and far more exacting and finer measurement is possible under controlled conditions. Closely controlled social science experiments do have the advantage of limiting the impact of third variables, but the unnatural situation presents problems for applying the findings outside the laboratory – the “artificiality” of the laboratory setting. Laboratory and field experiments both involve use of the experimental method, but they
differ in that field experiments are carried out in more natural settings. As an example of a field experiment, let us consider a study by Shotland and Straw (1976; see PIP, p. 675).
They arranged for a man and a woman to stage an argument and a fight fairly close to a number of bystanders. In one condition, the woman screamed, “I don’t know you”. In a second condition, she screamed, “I don’t know why I ever married you!”. When the bystanders thought the fight involved strangers, 65% of them intervened, against only 19% when they thought it involved a married couple. Thus, people were less likely to lend a helping hand when it was a “lovers’ quarrel” than when it was not. The bystanders were convinced that the fight was genuine, as was shown by the fact that 30% of the women were so alarmed that they shut the doors of their rooms, turned off the lights, and locked their doors.
The greatest advantage of laboratory experiments over field experiments is that it is generally easier to eliminate confounding variables in the laboratory than in the field. The experimenter is unlikely to be able to control every aspect of a natural situation.
Another clear advantage of laboratory experiments over field experiments is that it is much easier to obtain large amounts of very detailed information from participants in the laboratory. For example, it is hard to see how information about participants’ physiological activity or speed of performing a range of complex cognitive tasks could be obtained in a field experiment carried out in a natural setting. There are two main reasons why field experiments are limited in this way. First, it is not generally possible to introduce bulky equipment into a natural setting. Second, the participants in a field experiment are likely to realize they are taking part in an experiment if attempts are made to obtain a lot of information from them.
One of the advantages of field experiments over laboratory experiments is that the behavior of the participants is often more typical of their normal behavior. However, the greatest advantage of field experiments over laboratory experiments is that they are less artificial. An important reason why laboratory experiments are more artificial than field experiments is because the participants in laboratory experiments are aware that their behavior is being observed. As Silverman (1977) pointed out, “Virtually the only condition in which a subject [participant] in a psychological study will not behave as a subject [participant] is if he does not know he is in one.” One consequence of being observed is that the participants try to work out the experimenter’s hypothesis, and then act accordingly.
In this connection, Orne (1962) emphasized the importance of demand characteristics, which are “the totality of cues which convey an experimental hypothesis to the subjects.”
Orne found that the participants in one of his studies were willing to spend several hours adding numbers on random number sheets and then tearing up each completed sheet.
Presumably the participants interpreted the experiment as a test of endurance, and this motivated them to keep going.
Another consequence of the participants in laboratory experiments knowing they are being observed is evaluation apprehension. Rosenberg (1965) defined this as “an active anxiety-toned concern that he [the participant] win a positive evaluation from the experimenter or at least that he provide no grounds for a negative one.”
Sigall et al. (1970) contrasted the effects of demand characteristics and evaluation apprehension on the task of copying telephone numbers. The experimenter told participants doing the task for the second time that he expected them to perform it at a rate that was
actually slower than their previous performance. Adherence to the demand characteristics would have led to slow performance, whereas evaluation apprehension and the need to be capable would have produced fast times. The participants actually performed more quickly than they had done before, indicating the greater importance of evaluation apprehension.
This conclusion was strengthened by the findings from a second condition, in which the experimenter not only said that he expected the participants to perform at a slower rate, but also told them that those who rush are probably obsessive-compulsive. The participants in this condition performed the task slowly, because they wanted to be evaluated positively.
Another way in which laboratory experiments tend to be more artificial than field experiments was identified by Wachtel (1973). He used the term implacable experimenter to describe the typical laboratory situation, in which the experimenter’s behavior (e.g. instructions) affects the participant’s behavior, but the participant’s
behavior does not influence the experimenter’s behavior. There are two serious problems with experiments using an implacable or unyielding experimenter. First, because the situation (including the experimenter) is allowed to influence the participant but the participant isn’t allowed to affect the situation, it is likely that the effects of situations on our behavior are overestimated. Second, because much of the richness of the dynamic interactions between individual and situation has been omitted, there is a real danger that seriously oversimplified accounts of human behavior will emerge.
What we will do here is to discuss a few ethical issues that are of special relevance to laboratory or field experiments. So far as laboratory experiments are concerned, there is a danger that the participants will be willing to behave in a laboratory in ways they would not behave elsewhere. For example, Milgram (1974) found in his work on obedience to authority that 65% of his participants were prepared to give very intense electric shocks to someone else when the experiment took place in a laboratory at Yale University. In contrast, the figure was only 48% when the same study was carried out in a run-down office building. Thus, participants are often willing to do what they would not normally do in the setting of a prestigious laboratory.
Another ethical issue that applies especially to laboratory experiments concerns the participant’s right to withdraw from the experiment at any time. It is general practice to inform participants of this right at the start of the experiment. However, participants may feel reluctant to exercise this right if they think it will cause serious disruption to the experimenter’s research.
So far as field experiments are concerned, the main ethical issue relates to the principle of voluntary informed consent, which is regarded as central to ethical human research. By their very nature, most field experiments do not lend themselves to obtaining informed consent from the participants. For example, the study by Shotland and Straw (1976)
would have been rendered almost meaningless if the participants had been asked beforehand to give their consent to witnessing a staged quarrel! The participants in that study could reasonably have complained about being exposed to a violent quarrel.
Another ethical issue with field experiments is that it is not possible in most field experiments to tell the participants that they have the right to withdraw at any time without offering a reason.
Human behavior is sensitive to the environment within which it occurs. People act differently in a laboratory than in the natural world. Several characteristics of laboratories are thought to influence behavior, and the very third variables controlled in the lab may be the ones that determine behavior in the real world. So, findings from laboratory experiments may only be valid for laboratory environments.
Artificiality
How much does it matter that laboratory experiments are artificial? As Coolican (1998) pointed out, “In scientific investigation, it is often necessary to create artificial circumstances in order to isolate a hypothesized effect.” If we are interested in studying basic cognitive processes such as those involved in perception or attention, then the artificiality
of the laboratory is unlikely to affect the results. On the other hand, if we are interested in studying social behavior, then the issue of artificiality does matter. For example, Zegoib et al. (1975) found that mothers behaved in a warmer and more patient way with their children when they knew they were being observed than when they did not.
Carlsmith, Ellsworth, and Aronson (1976) drew a distinction between mundane realism and experimental realism. Mundane realism refers to experiments in which the situation is set up to resemble situations often found in everyday life. In contrast, experimental realism refers to experiments in which the situation may be rather artificial, but is sufficiently interesting to produce full involvement from the participants. Milgram’s (1974) research on obedience to authority is a good example of experimental realism (see PIP, Chapter 20).
The key point is that experimental realism may be more important than mundane realism in producing findings that generalize to real-life situations.
For example, new commercials are tested in controlled conditions: eye tracking, liking for commercials, and influence on purchase interest are measured. Researchers may try to provide less artificial conditions for study, such as a simulated living room, yet commercials that test high in lab experiments often do not work very well when used in real marketing campaigns. Researchers want to retain some of the advantages of the experiment – the ability to manipulate/introduce the independent variable, to control how much of it is presented, and to establish time order (which comes first) – while sacrificing some of their ability to control third variables. The goal is to improve our ability to generalize our findings to the real world.
QUASI-EXPERIMENTS
“True” experiments based on the experimental method provide the best way of being able to draw causal inferences with confidence. However, it is often the case that there are practical or ethical reasons why it is simply not possible to carry out a true experiment. In such circumstances, investigators often carry out what is known as a quasi-experiment. Quasi-experimental designs “resemble experiments but are weak on some of the characteristics” (Raulin & Graziano, 1994). There are two main ways in which quasi-experiments tend to fall short of being true experiments. First, the manipulation of the independent variable is often not under the control of the experimenter. Second, it is usually not possible to allocate the participants randomly to groups.
There are numerous hypotheses in psychology that can only be studied by means of quasi-experiments rather than true experiments. For example, suppose that we are interested in studying the effects of divorce on young children. We could do this by comparing children whose parents had divorced with those whose parents were still married. There would, of course, be no possibility of allocating children at random to the divorced or non-divorced parent groups! Studies in which pre-existing groups are compared often qualify as quasi-experiments. Examples of such quasi-experiments would be comparing the learning performance of males and females, or comparing the social behavior of introverted and extraverted individuals.
Natural experiments
The natural experiment is a type of quasi-experiment in which a researcher makes use of some naturally occurring event for research purposes. An example of a natural experiment is a study by Williams (1986) on the effects of television on aggressive behavior in Canadian children aged between 6 and 11 years. Three communities were compared: one in which television had just been introduced, one in which there was only one television
channel, and one in which there were several channels. The children in the first community showed a significant increase in verbal and physical aggression during the first two years after television was introduced, whereas those in the other two communities did not.
This was not a true experiment, because the children were not allocated randomly to the three conditions or communities.
Adams and Adams (1984) carried out a natural experiment following the eruption of the Mount St Helens volcano in 1980. As the volcanic eruption had been predicted, they were able to assess the inhabitants of the small town of Othello before and after it happened. There was a 50% increase in mental health appointments, a 198% increase in stress-aggravated illness, and a 235% increase in diagnoses of mental illness.
Advantages and limitations
What are the advantages of natural experiments? The main one is that the participants in natural experiments are often not aware that they are taking part in an experiment, even though they are likely to know that their behavior is being observed. Another advantage of natural experiments is that they allow us to study the effects on behavior of independent variables that it would be unethical for the experimenter to manipulate.
For example, Adams and Adams (1984) were interested in observing the effects of a major stressor on physical and mental illness. No ethical committee would have allowed them to expose their participants deliberately to stressors that might cause mental illness, but they were able to take advantage of a natural disaster to conduct a natural experiment.
What are the limitations of natural experiments? The greatest limitation occurs because the participants have not been assigned at random to conditions. As a result, observed differences in behavior between groups may be due to differences in the types of participants in the groups rather than to the effects of the independent variable. Consider, for example, the study by Williams (1986) on television and aggression. The children in the community that had just been exposed to television might have been naturally more aggressive than the children in the other two communities. However, the children in the three communities did not differ in their level of aggression at the start of the study.
It is usually possible to check whether the participants in the various conditions are comparable. For example, they can be compared with respect to variables such as age, sex, socio-economic status, and so on. If the groups do differ significantly in some respects irrelevant to the independent variable, then this greatly complicates the task of interpreting the findings of a natural experiment.
The other major limitation of natural experiments involves the independent variable.
In some natural experiments, it is hard to know exactly what aspects of the independent variable have caused any effects on behavior. For example, there is no doubt that the eruption of Mount St Helens was a major stressor. It caused stress in part because of the possibility that it might erupt again and produce more physical devastation. However, social factors were also probably involved. If people in Othello observed that one of their neighbors was highly anxious because of the eruption, this may have heightened their level of anxiety.
Ethical issues
It can be argued that there are fewer ethical issues with natural experiments than with many other kinds of research. The reason is that the experimenter is not responsible for the fact that the participants have been exposed to the independent variable. However, natural experiments can raise various ethical issues. First, there can be the issue of informed voluntary consent, in view of the fact that the participants are often not aware
that they are taking part in an experiment. Second, experimenters carrying out natural experiments need to be sensitive to the situation in which the participants find themselves. People who have been exposed to a natural disaster such as a volcanic eruption may resent it if experimenters start asking them detailed questions about their mental health or psychological
well-being.
Naturalistic observation involves methods designed to examine behavior without the experimenter interfering with it in any way. This approach was originally developed by ethologists such as Lorenz and Tinbergen, who studied animals in their natural habitat rather than in the laboratory and discovered much about their behavior.
“A prominent example of field research, in particular naturalistic observation, is that of Margaret Mead. Cultural anthropologist and writer Margaret Mead (1901-1978) was born in Philadelphia and graduated from Barnard College in 1923. Appointed assistant curator of ethnology at the American Museum of Natural History in 1926, she embarked on two dozen trips to the South Pacific to study primitive cultures. In her resulting books such as Coming of Age in Samoa (1928), Mead formulated her ideas about the powerful effects of social convention on behavior. Named a professor of anthropology at Columbia University in 1954, Mead continued to advocate for the relaxation of traditional gender and sexual conventions through her lecturing and writing.” (http://www.history.com/topics/womens-history/margaret-mead) Another example of the use of naturalistic observation in human research is the work of Brown, Fraser, and Bellugi (1964). They studied the language development of three children (Adam, Eve, and Sarah) by visiting them at home about 35 times a year.
One of the key requirements of the method of naturalistic observation is to avoid intrusion. Dane (1994, p. 1149) defined this as “anything that lessens the participants’ perception of an event as natural.” There are various ways in which intrusion can occur.
For example, there will be intrusion if observations are made in an environment that the participants regard as a research setting. There will also be intrusion if the participants are aware that they are being observed. In many studies, the experimenter is in the same room as the participants, and so they are almost certain to realize they are being observed. When this is the case, the experimenter may try to become a familiar and predictable part of the situation before any observations are recorded. 
The participants in naturalistic observation often display a wide range of verbal and non-verbal behavior. How can observers avoid being overloaded in their attempts to record this behavior? One approach is to focus only on actions or events that are of particular interest to the researcher; this is known as event sampling. Another approach is known as time sampling, in which observations are only made during specified time periods (e.g. the first 10 minutes of each hour). A third approach is point sampling, in which one individual is observed in order to categorize their current behavior, after which a second individual is observed.
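To make the three sampling strategies concrete, here is a minimal Python sketch; the timestamped observation log, the behavior codes, and the 10-minutes-per-hour window are all hypothetical and not from the lecture.

# Minimal sketch with hypothetical data: event, time, and point sampling
# applied to a log of (minute, child, behavior) observations.

observations = [
    (1, "A", "plays alone"), (3, "B", "hits peer"), (12, "A", "shares toy"),
    (14, "C", "hits peer"), (61, "B", "plays alone"), (63, "A", "hits peer"),
]

# Event sampling: record only the actions of particular interest to the researcher.
event_sample = [obs for obs in observations if obs[2] == "hits peer"]

# Time sampling: record only during specified periods,
# e.g. the first 10 minutes of each hour.
time_sample = [obs for obs in observations if obs[0] % 60 < 10]

# Point sampling: categorize one individual's current behavior,
# then move on to the next individual (here: the first observation of each child).
point_sample = {}
for minute, child, behavior in observations:
    point_sample.setdefault(child, (minute, behavior))

print(event_sample)
print(time_sample)
print(point_sample)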
In considering the data obtained from naturalistic observation, it is important to distinguish between recording and interpretation or coding. For example, an observer may record that the participant has moved forwards, and separately interpret that movement (e.g. as an aggressive approach rather than a friendly one).
What are the advantages of naturalistic observation? First, if the participants are unaware that they are being observed, then it provides a way of observing people behaving naturally.
When this happens, there are no problems from demand characteristics, evaluation apprehension, the implacable experimenter, and so on. Second, many studies based on naturalistic observation provide richer and fuller information than typical laboratory experiments. For example, participants’ behavior may be observed in a range of different social contexts rather than on their own in the laboratory. Third, it is sometimes possible to use naturalistic observation when other methods cannot be used. For example, the participants may be unwilling to be interviewed or to complete a questionnaire. In the case of participants being observed at work, it may be impossible to obtain permission to disrupt their work in order to carry out an experiment.
What are the limitations of naturalistic observation? These are some of the major ones: 
● The experimenter has essentially no control over the situation; this can make it very hard or impossible to decide what caused the participants to behave as they did.
● The participants are often aware that they are being observed, with the result that their behavior is not natural.
● There can be problems of reliability with the observational measures taken, because of bias on the part of the observer or because the categories into which behavior is coded are imprecise. Attempts to produce good reliability often involve the use of very precise but narrow categories, leading to much of the participants’ behavior simply being ignored. Reliability can be assessed by correlating the observational records of two different observers. This produces a measure of inter-rater reliability.
● The fact that observations are typically interpreted or coded prior to analysis can cause problems with the validity of measurement. For example, it may be assumed invalidly that all instances of one child striking another child represent aggressive acts, when in fact many of them are only playful gestures. Thus, great care needs to be taken in operationalization, which is a procedure in which a variable (e.g. aggressive act) is defined by the operations taken to measure it.
● There are often problems of replication with studies of naturalistic observation. For example, the observed behavior of children in a school may depend in part on the fact that most of the teachers are very lenient and fail to impose discipline. The findings might be very different at another school in which the teachers are strict.
Ethical issues
Naturalistic observation poses ethical problems if the participants do not realize that their behavior is being observed. In those circumstances, they obviously cannot give their voluntary informed consent to be involved in the study. There can also be problems about confidentiality. Suppose, for example, that naturalistic observation takes place in a particular school, and the published results indicate that many of the children are badly behaved. Even if the name of the school is not mentioned in the report, many people reading it will probably be able to identify the school because they know that the researchers made detailed observations there.
A variant of field research that attempts to resolve some of the issues of naturalistic observation is participant observation. In participant observation the observer participates in ongoing activities and records observations. Participant observation extends beyond naturalistic observation because the observer is a "player" in the action. The technique is used in many studies in anthropology and sociology. Often the researcher actually takes on the role being studied; for example, living in a commune, becoming a firefighter, enrolling in flight training school, working in a mental hospital (or passing as a patient), being a cocktail waitress, living among the mushroom hunters of the northwest, or joining a cult. A related approach is ethnography – the study of particular people and places. These need not be exotic locations. Ethnography is sometimes referred to as field work or qualitative sociology. It is more of an approach than a single research method, in that it generally combines several research methods including interviews, observation, and physical trace measures. Good ethnography truly captures a sense of the place and peoples studied.
There are obvious trade-offs in participant observation research. The researcher is able to get an "insider" viewpoint, and the information may be much richer than that obtained through systematic observation. It has probably already occurred to you that there are potential problems. Consider the two sources of error in systematic research: bias and reactivity. These difficulties are magnified in participant observation. Events are interpreted through the single observer's eyes. The technique generally involves taking extensive notes and writing down one's impressions, so clearly one's own views can come into play. There is also the problem of "going native," which means becoming so involved with and sympathetic to the group of people being studied that objectivity is lost. Because the observer is a participant in the activities and events being observed, it is easy to influence other people's behavior, thereby raising the problem of reactivity -- influencing what is being observed. Participant observation and ethnography are probably the most stressful research methods for the researcher. People with whom one is interacting may make unreasonable demands. As a participant, one may observe illegal behavior. Ethical dilemmas are commonplace in participant observation. Does one "rat" on fellow employees? Or keep quiet and perhaps risk public safety?
Reliability and validity in qualitative research. Individual field notes are not reliable; they lack independent confirmation. Checks on the observer's accounts may be available through other methods or from the work of other researchers. Use of additional observers will increase reliability. Because of their detail, the observations of a participant observer may be more valid than those done in a more systematic but less in-depth way. Validity can be increased by checking with other available data; for example, if the observer claims the group being studied does not engage in any illegal behaviors, yet many of the group members have been arrested, there would be reason to question the validity of the observation.
Observation deals with actions and behavior. If you want to find out what people do, you should observe them. If you want to find out what they think (e.g., attitudes, beliefs, expectations, or knowledge), you should ask them directly. Although there are exceptions, observation is generally the best method for studying natural behavior, while interviews and questionnaires are more appropriate for exploring opinions and beliefs. Reliability is always a problem in observation. For systematic observation, the use of two independent observers is recommended during the early stages of the study. No matter how simple and straightforward the behavior being studied, it still is wise to check on reliability. If two independent observers cannot agree on what they see, then the conclusions of the study are in doubt. Participant observation has a different set of strengths and limitations. Its strength is in the richness of the description. Its weakness is its dependence upon the experience of the participant observer. Reliability is a major problem: it is rare to have independent observations of the same events, and the method is subject to the biases of the observer. That can be counteracted by using multiple methods to gather data. For example, interviews can corroborate information gained through observation, trace measures, or other sources of data. (Sommer, 2014)
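As a minimal illustration of the two-observer check recommended above, the sketch below computes raw percent agreement and Cohen's kappa for two hypothetical observers coding the same ten intervals; the data and the 0/1 coding scheme are invented for the example.

# Minimal sketch with hypothetical data: two observers code the same 10 intervals
# as aggressive (1) or not aggressive (0); we check how well they agree.
from collections import Counter

observer_a = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
observer_b = [1, 0, 1, 1, 1, 0, 0, 0, 0, 1]

n = len(observer_a)
# Raw percent agreement: proportion of intervals coded identically.
agreement = sum(a == b for a, b in zip(observer_a, observer_b)) / n

# Cohen's kappa corrects agreement for the level expected by chance alone.
count_a, count_b = Counter(observer_a), Counter(observer_b)
p_chance = sum((count_a[c] / n) * (count_b[c] / n) for c in set(observer_a) | set(observer_b))
kappa = (agreement - p_chance) / (1 - p_chance)

print(f"Percent agreement: {agreement:.2f}, Cohen's kappa: {kappa:.2f}")
# With these invented codes: agreement = 0.80, kappa = 0.60.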
Now that we have looked at field research, let's give a basic rundown of some of the problems that we routinely encounter with any type of research of this kind:
There is a greatly reduced ability to prevent third-variable contamination of results. This refers to the issue of confounding variables.
History effects are also a concern: one group of individuals observed or interacted with at one time might not be the same as another group, because each group is shaped by the generational influences of its time. Furthermore, the experiences that people have over time might change the way they would respond if they were observed at a later date. Think about it this way: if a researcher was trying to gain an understanding of individuals in North America by looking at college students, could we say that their experiences are exactly the same as those same individuals' experiences 20 years later? Perhaps not. Individuals who are no longer in school have different life experiences and stressors than those who are, and these different life demands change the way these individuals look to an outsider.
There can also be cycles of behavior. These same individuals will look very different in their actions and behaviors when in school and when on summer break; it is therefore very difficult to generalize their behavior without taking into account the different cycles these individuals experience.
What can the field researcher do?
Measure and monitor likely alternative explanatory variables. Question subjects about sources of influence. Use multiple manipulations of the independent variable over time, multiple measurement waves, and multiple dependent measures. No matter what we do, however, it is important to note that despite the major advantages of this (or indeed any type of) research, none of these safeguards will be completely effective.
Field research also brings practical problems:
Expense. There is a trade-off between extensive and intensive study, and budget constraints limit the number of sites, etc.
Access/permission. Some research may present concerns to authorities, citizens, etc. Gain authorization and support prior to entering the field, and maintain good relationships with community leaders, etc. throughout the intervention/research.
Unusual or unique events. Sometimes unusual or unique events occur: service disruptions, political campaigns, a school shooting, the introduction of Wi-Fi in an area. Because most such events are unplanned, the ability to prepare for them is limited. A research group, materials, and resources may be kept ready for certain types of events, or researchers must engage in ‘firehouse research’, gathering as much data as possible in a short time; this is inefficient and may miss important data. However, real-world events may provide very valuable data and may have both internal and external validity.
To understand the distinction between laboratory and field research and decide which one to choose, we must clarify the tradeoffs:
  Reliability - Reliability is the degree to which the findings from a study produce stable and consistent results. For example, if we looked at the same group of people in the exact same way at a different time, would we draw the same conclusions? (A minimal numerical sketch of this idea follows the field vs. laboratory comparison below.)
    Validity - Validity refers to how well the type of research we engage in measures what it is purported to measure (e.g., when we try to understand the behaviors of North American people in general, are college students a good group to look at and generalize from?)
Internal Validity - One of the keys to understanding internal validity (IV) is the recognition that when it is associated with experimental research it refers both to how well the study was run (research design, operational definitions used, how variables were measured, what was/wasn't measured, etc.), and how confidently one can conclude that the change in the dependent variable was produced solely by the independent variable and not extraneous ones. In group experimental research, IV answers the question, "Was it really the treatment that caused the difference between the means/variances of the subjects in the control and experimental groups?" Similarly, in single-subject research (e.g., ABAB or multiple baseline), IV attempts to answer the question, "Do I really believe that it was my treatment that caused a change in the subject's behavior, or could it have been a result of some other factor?" In descriptive studies (correlational, etc.) internal validity refers only to the accuracy/quality of the study (e.g., how well the study was run-see beginning of this paragraph).

   External Validity - The extent to which a study's results (regardless of whether the study is descriptive or experimental) can be generalized/applied to other people or settings reflects its external validity. Typically, group research employing randomization will initially possess higher external validity than will studies (e.g., case studies and single-subject experimental research) that do not use random selection/assignment.

Reliability and Validity in Field vs. Laboratory Research
Field Research: low internal validity, high external validity, low reliability.

Laboratory Research: high internal validity, low external validity, high reliability.
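The reliability question defined above ("would we draw the same conclusions at a different time?") can be made concrete with a test-retest check: correlate the same measure taken at two points in time. This is only a sketch with invented scores, not data from any study mentioned here.

# Minimal sketch with invented scores: test-retest reliability as the correlation
# between the same measure administered to the same people at two times.
import numpy as np

time_1 = np.array([12, 15, 9, 20, 17, 11, 14, 18])   # hypothetical scale scores, wave 1
time_2 = np.array([13, 14, 10, 19, 18, 12, 13, 17])  # same people, wave 2

r = np.corrcoef(time_1, time_2)[0, 1]
print(f"Test-retest reliability (Pearson r): {r:.2f}")  # values near 1 indicate stable, consistent results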

Comparative-Historical methods
Since the rise of the social sciences, researchers have used comparative- historical methods to expand insight into diverse social phenomena and, in so doing, have made great contributions to our understanding of the social world.
Comparative-Historical Analysis
Mahoney and Rueschemeyer (2003) refer to it as comparative-historical analysis in recognition of the tradition’s growing multidisciplinary character. In addition to sociology, comparative-historical analysis is quite prominent in political science and is present—albeit much more marginally—in history, economics, and anthropology.
4 types of comparative-historical research
• Historical Events Research –focuses on one short historical period (1 case, 1 time period)
• Historical Process Research –traces a sequence of events over a number of years (1 case, many time periods)
• Cross-sectional Comparative Research comparing data from one time period between two or more nations (many cases, 1 time period)
• Comparative Historical Research – longitudinal comparative research over a prolonged period of time (many cases, many time periods)
Comparative and Historical Research by number of cases and length of time studied

How do we understand Comparative Historical Research?
As the Venn diagram in this Figure depicts, comparative-historical analysis has four main defining elements. Two are methodological, as works within the research tradition employ both within-case methods and comparative methods. Comparative-historical analysis is also defined by epistemology. Specifically, comparative-historical works pursue social scientific insight and therefore accept the possibility of gaining insight through comparative-historical and other methods. Finally, the unit of analysis is a defining element, with comparative-historical analysis focusing on more aggregate social units.
Historical Methods: Historical methods, also known as historiography, are the most common analytic techniques used in the discipline of history. They are generally used to explore either what happened at a particular time and place or what the characteristics of a phenomenon were like at a particular time and place. Similar to ethnographic methods, methodological discussions of historiography focus on both data collection and data analysis. The first includes guidelines for finding historical data, as this is a major concern of historians. The second component consists of guidelines for interpreting and presenting data. These guidelines generally describe how to judge the validity of historical data but rarely discuss how the data is analyzed once its validity has been assessed. This overlooked element is central to the historical method and involves piecing together the evidence to make a conclusion about the research question.
Researchers can use historical methods to analyze diverse phenomena. For instance, a researcher might investigate the processes leading to Truman’s decision to drop nuclear bombs on Japan. For this, the researcher might analyze biographies, diaries, and correspondence between Truman and other officials involved in deciding to use nuclear weapons. They might also interview officials or close confidants of deceased officials. In gathering this data, the researcher must consider the source of the data and its apparent validity and accuracy. Then, the researcher must analyze the collection of data to assess the influences on Truman, his state of mind, and the sequence of events leading up to his decision to use nuclear weapons. Alternatively, researchers interested in the structure of families in the English countryside during the mid-nineteenth century would rely on different types of sources. They might use census data to get information on the members of households; legal documents about marriage, divorce, inheritance, and household responsibilities; and newspapers, magazines, diaries, and other relevant literature that provides information on household relations. After gathering the data, the researcher must then assess the validity of the data, analyze the collection of data to gain insight into the structure of families, and write a narrative that presents the evidence.
Not surprisingly, historical methods are able to provide insight into the characteristics and determinants of historical phenomena. Similar to ethnographic methods, they offer considerable insight into complex phenomena and processes. Indeed, historical and ethnographic methods are commonly used to analyze the same types of phenomena, the main difference being that ethnographic methods analyze contemporary examples whereas historical methods analyze examples from the past. In this way, data collection and type of data are commonly the only factors separating ethnographic methods from historical methods. Historical methods are also similar to ethnographic methods because their insight is limited to particular cases, as historical methods analyze particular phenomena in particular places at particular times. As a consequence, they are ill-suited for nomothetic explanations. Historical methods are also disadvantaged because they cannot generate their own data and therefore depend on the presence of historical sources. As a consequence, such methods cannot be used to analyze phenomena that lack appropriate data.

Comparative-Historical Methods in Comparative Perspective: Comparative-historical methods combine comparative and within-case methods, and therefore have affinities with both comparative methods and within-case/ideographic methods. Similar to statistical and experimental methods, comparative-historical methods employ comparison as a means of gaining insight into causal determinants. Similar to ethnographic and historical methods, comparative-historical methods explore the characteristics and causes of particular phenomena.
Comparative-historical analysis, however, does not simply combine the methods from other major methodological traditions—none of the major comparative methods is very common in comparative-historical analysis. Indeed, only a small—albeit growing—portion of comparative-historical analyses uses statistical comparison, and laboratory-style experimental methods are not used by comparative-historical researchers. There are several reasons for the marginal position of statistics within comparative-historical analysis. Most notably, within-case methods have been the dominant method used in the research tradition, and the basic logics of statistics and within-case methods differ considerably, with the former focusing on relationships between variables and the latter focusing on causal processes (see Mahoney and Goertz 2006).
As a consequence, comparative-historical researchers commonly avoid statistics and simply focus on causal processes. Additional reasons for the limited use of statistical comparison within the comparative-historical research tradition include the limited availability of historical data needed for appropriate statistical analyses and the small number of cases analyzed by comparative-historical researchers.
Experimental methods are excluded from comparative-historical analysis for two main reasons. First, comparative-historical scholars ask research questions about concrete real-world phenomena, such as, “What caused the American civil rights movement?” Controlled experiments, on the other hand, can only provide insight into general issues, for example, “Are people with higher or lower self-confidence more likely to retaliate aggressively?” Second, comparative-historical analysis focuses on social processes like revolutions, economic development, and state building that involve many people within a complex social environment over an extended period. For practical and moral reasons, such phenomena cannot be replicated in controlled environments.
Instead of statistics and controlled experiments, the most common comparative methods employed in comparative-historical analysis are small sample size comparisons. These comparisons usually explore how causal processes are similar and different and, in so doing, pay attention to the impact of context and causal mechanisms. Different from statistical comparison, such comparisons are usually not an independent source of insight, pursue insight that can be applied to a fewer number of cases, and focus on more specific details.
While using ethnographic methods and historical methods as sources of data, comparative-historical methods differ fundamentally from each in their analysis of the data. Whereas ethnographic and historical methods are primarily descriptive and document the characteristics of social phenomena, comparative-historical methods analyze data to offer insight into causal determinants. For this, comparative-historical researchers commonly employ three different within-case methods. These methods are all similar to those used by detectives and involve sorting through the available evidence and attempting to reconstruct causal scenarios in an effort to gain insight into the determinants of social phenomena. The three methods differ based on the type of inference they pursue: causal narrative offers insight into causal processes, process tracing explores causal mechanisms, and pattern matching tests theories.
Comparative-historical methods are also distinct from all other major methodological traditions in the social sciences because they have a much more extensive and diverse methodological toolkit. Indeed, comparative-historical methods include multiple types of comparative methods and multiple types of within-case methods. Researchers combine them in different ways and for different purposes. Considering the latter, they sometimes combine methods for triangulation, sometimes combine methods to exploit the strengths of different methods, and sometimes combine methods to improve the insight that can be gained by the individual methods.
Relatedly, comparative-historical methods are also unique because they pursue both ideographic and nomothetic explanations. The within-case methods offer ideographic insight, whereas the comparative methods offer more nomothetic insight. Their combination, in turn, weakens both the ideographic bent of the within-case methods and the nomothetic bent of the comparative methods, and pushes the researcher to consider both ideographic and nomothetic explanations.
Most comparative-historical researchers value ideographic explanations deeply, and both Skocpol and Somers (1980) and Tilly (1984) recognize ideographic explanation as a common goal of comparative-historical researchers. However, the methods of comparative-historical analysis are actually biased against this type of explanation. Most notably, comparative methods require multiple cases and usually seek insight that can be extended beyond a single case, and scholars pursuing purely ideographic explanations usually restrict their analysis to a single case. As a consequence, ideographic explanations are relatively rare in comparative-historical analysis but very common in history and historical sociology.
At the same time, comparisons highlight differences as well as similarities, and some comparative-historical researchers focus on the differences highlighted through comparison. These works show how each case is unique and does not follow any common logic. Notably, the failure of such comparisons to discover commonalities does not imply that ideographic explanations are in any way inferior to more nomothetic explanations. As Tilly notes, an ideographic explanation is not a “bungled attempt at generalization” (1984, 88). Instead, it seeks to find variation, not explain it. Such difference-oriented comparisons are valuable because they highlight the great diversity and complexity of the social world and show how social phenomena are commonly unique. They therefore serve as a corrective to comparative works that seek to stretch generalizations to the extreme. It is thus not by coincidence that most ideographic analyses in comparative-historical analysis seek to correct the universalism of different theories.
Comparative Historical “toolkit”: Besides comparative methods, comparative-historical scholars employ several different types of within-case methods. Ethnography can and is used for within-case analysis and is therefore part of the comparative-historical methodological toolkit. Comparative-historical researchers only rarely use it with great rigor, however, because ethnographic methods are usually used for very descriptive works that attempt to increase understanding about a particular group of people, their livelihoods, and their culture. Comparative-historical analysis, on the other hand, usually has a broader research agenda than understanding culture. As a consequence, ethnography can only provide partial insight into most of the questions asked by comparative-historical researchers. In addition, comparative-historical methods are used to analyze multiple cases, and ethnographic work is so intensive that it is very difficult to analyze more than one case in a single work.
  Besides comparative methods, comparative-historical scholars employ several different types of within-case methods:
 Ethnography
  Historical Methods
  Idiographic Methods
  Nomothetic Explanations
So what does this tool-kit look like?
• Case-oriented. It focuses on the nation or other unit as a whole.
• Holistic. It is concerned with the context in which events occurred and the interrelations among different events and processes: “how different conditions or parts fit together” (Ragin, 1987:25–26).
• Conjunctural. This is because, it is argued, “no cause ever acts except in complex conjunctions with others” (Abbott, 1994:101).
• Temporal. It becomes temporal by taking into account the related series of events that unfold over time.
•  Historically specific. It is likely to be limited to the specific time(s) and place(s) studied, like traditional historical research.
• Narrative. It researches a story involving specific actors and other events occurring at the same time (Abbott, 1994:102), or one that takes account of the position of actors and events in time and in a unique historical context (Griffin, 1992).
• Inductive. The research develops an explanation for what happened from the details discovered about the past.

Historical Events Research & Event-Structure Analysis:
 It often utilizes a process known as Historical Events Research.
Historical events research is research on past events that does not follow processes over some long period of time; research that is basically cross-sectional is historical events research rather than historical process research.
Event Structure Analysis is a qualitative approach that relies on a systematic coding of key events or national characteristics to identify the underlying structure of action in a chronology of events.
• Oral History can be useful for understanding historical events that occurred within the lifetimes of living individuals.
Steps of Event-Structure Analysis
An event structure analysis requires several steps (a minimal coding sketch follows the list):
(Example: an auto manufacturing plant that produces SUVs closes in your home town).
• Classifying historical information into discrete events: plant closes, worker strike, corporate buy-out, gas prices increase.
• Ordering events into a temporal sequence: 1. gas prices increase 2. corporate buy-out 3. worker strike 4. plant closes.
• Identifying prior steps that are prerequisites for subsequent events: oil embargo, political crisis in oil-producing nations, foreign-produced fuel-efficient cars become popular.
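As a rough illustration of what coding key events to identify the underlying structure of action might look like in practice, here is a minimal Python sketch of the plant-closing example above; the event labels and prerequisite links simply restate the steps just listed and are not a formal event-structure analysis tool.

# Minimal sketch: the plant-closing chronology coded as discrete events,
# each linked to the prior conditions treated as its prerequisites.

sequence = ["gas prices increase", "corporate buy-out", "worker strike", "plant closes"]

prerequisites = {
    "gas prices increase": ["oil embargo", "political crisis in oil-producing nations"],
    "corporate buy-out": ["gas prices increase"],
    "worker strike": ["corporate buy-out"],
    "plant closes": ["worker strike", "foreign-produced fuel-efficient cars become popular"],
}

# Walk the chronology and print the structure of action it implies.
for event in sequence:
    needed = prerequisites.get(event, [])
    print(f"{event!r} presupposes: {', '.join(needed) if needed else 'nothing coded'}")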
More on Oral Histories:
Another way to get a very rich understanding of how individuals experienced historical events is through oral history.
This requires intensive interviewing of individuals, which is an obtrusive measure. However, there are many oral histories archived at university libraries and the Library of Congress. You might get lucky and find out that oral histories of community members were conducted during the time of the plant closing. You could qualitatively analyze the oral histories for information about how people perceived the plant closing and its importance to the community.
Historical Process Research:
Historical process research extends historical events research by focusing on a series of events that happened over a longer period of time. This longitudinal component allows for a much more complete understanding of historical developments than is often the case with historical events research.
Most likely to be qualitative and case-oriented (traditional history of country X)
• The case could be anything from a small community (a village) to events across the whole world!
Cross-Sectional Comparative Research:
Comparisons between countries during one time period can help social scientists identify the limitations of explanations based on single-nation research. These comparative studies may focus on a period in either the past or the present. Could be quantitative (!!!!) or qualitative.
Example: comparing the perceptions of events taking place in the recent conflicts between Ukraine and Russia from media reports from both sides.
 Cross-Sectional vs. Longitudinal:
Cross-Sectional Research – Takes place by looking at an event at one time point.
Longitudinal Research – Takes place by looking at several cross-sections to gain an understanding of the evolution of events and how they change over time.
These types of research are often utilized in comparative historical analysis, but they are also utilized in experimental and quasi-experimental research and in most of the other types of research covered by this course!
Problems With Historical Research
Reliance on Secondary vs. Primary sources of information (data).
Primary Sources – Collecting data from the individual source such as being present when an event (such as an important election) takes place. This is more verifiable because we are seeing the data for what it is.
Secondary Sources – Collecting data from others who have already collected the data, such as newspapers, magazines, and interviews. These sources of data are prone to the bias of the source, therefore the data may be somewhat inaccurate. Example: Hearst Publications and the Spanish-American War
The Spanish-American War is often referred to as the first "media war." During the 1890s, journalism that sensationalized—and sometimes even manufactured—dramatic events was a powerful force that helped propel the United States into war with Spain. Led by newspaper owners William Randolph Hearst and Joseph Pulitzer, journalism of the 1890s used melodrama, romance, and hyperbole to sell millions of newspapers--a style that became known as yellow journalism.
The term yellow journalism came from a popular New York World comic called "Hogan's Alley," which featured a yellow-dressed character named "the yellow kid." Determined to compete with Pulitzer's World in every way, rival New York Journal owner William Randolph Hearst copied Pulitzer's sensationalist style and even hired "Hogan's Alley" artist R.F. Outcault away from the World. In response, Pulitzer commissioned another cartoonist to create a second yellow kid. Soon, the sensationalist press of the 1890s became a competition between the "yellow kids," and the journalistic style was coined "yellow journalism."
Yellow journals like the New York Journal and the New York World relied on sensationalist headlines to sell newspapers. William Randolph Hearst understood that a war with Cuba would not only sell his papers, but also move him into a position of national prominence. From Cuba, Hearst's star reporters wrote stories designed to tug at the heartstrings of Americans. Horrific tales described the situation in Cuba--female prisoners, executions, valiant rebels fighting, and starving women and children figured in many of the stories that filled the newspapers. But it was the sinking of the battleship Maine in Havana Harbor that gave Hearst his big story--war. After the sinking of the Maine, the Hearst newspapers, with no evidence, unequivocally blamed the Spanish, and soon U.S. public opinion demanded intervention.
Today, historians point to the Spanish-American War as the first press-driven war. Although it may be an exaggeration to claim that Hearst and the other yellow journalists started the war, it is fair to say that the press fueled the public's passion for war. Without sensational headlines and stories about Cuban affairs, the mood for Cuban intervention may have been very different. At the dawn of the twentieth century, the United States emerged as a world power, and the U.S. press proved its influence.

View Documentary on the Spanish American War: https://www.youtube.com/watch?v=8g8NpQsmxj4
• Documents and other evidence may have been lost or damaged.
• Available evidence may represent a sample biased toward more newsworthy figures.
• Written records will be biased toward those who were more prone to writing.

• Feelings of individuals involved in past events may be hard, if not impossible, to reconstruct.


 “History is created by the victor.” Given this statement, do we believe that comparative historical research gives us a good grasp of what actually happened in the past?

Lecture 4: Comparative Historical Research (adapted from Lange (2013), Chapter 1; Pine Forge Press, an imprint of Sage Publications, 2004; & PBS.org)
The Case Study Method
Lecture 5: adapted from Miller (2014), University of Kentucky (2014), Psychology Press (2004), & Huitt & Kaeck (1999)
This has always been the tradeoff within research in the social sciences: whether your research is field or laboratory research, we always lose valuable information about individual variation when we try to distill experiences, emotions, and behaviors down into common experiences that we can measure numerically and generalize across a population. This is especially the case when it comes to individual differences. So what is an alternative that at the very least tells us a lot about the individual and gives us a very detailed picture of that individual, but cannot necessarily generalize to the population at large? Well, this brings us to today’s lecture: the case study method.
2) Working Definition
Case Study Method: “An empirical inquiry about a contemporary phenomenon (e.g., a ‘case’), set within its real-world context—especially when the boundaries between phenomenon and context are not clearly evident” (Yin, 2009a, p. 18; SagePub, 2014).
 The case study method embraces the full set of procedures needed to do case study research. These tasks include designing a case study, collecting the study’s data, analyzing the data, and presenting and reporting the results. All case study research starts from the same compelling feature: the desire to derive an up-close or otherwise in-depth understanding of a single or small number of “cases,” set in their real-world contexts (e.g., Bromley, 1986, p. 1). The closeness aims to produce an invaluable and deep understanding—that is, an insightful appreciation of the “case(s)”—hopefully resulting in new learning about real-world behavior and its meaning. The distinctiveness of the case study, therefore, also serves as its abbreviated definition, given above.
3) Assumptions
“Among other features, case study research assumes that examining the context and other complex conditions related to the case(s) being studied are integral to understanding the case(s) (SagePub, 2014).”
Thus, among other features, case study research assumes that examining the context and other complex conditions related to the case(s) being studied are integral to understanding the case(s).  The in-depth focus on the case(s), as well as the desire to cover a broader range of contextual and other complex conditions, produce a wide range of topics to be covered by any given case study. In this sense, case study research goes beyond the study of isolated variables. As a by-product, and as a final feature in appreciating case study research, the relevant case study data are likely to come from multiple and not singular sources of evidence.
4) When to use the Case Study Method
“First and most important, the choices among different research methods, including the case study method, can be determined by the kind of research question that a study is trying to address (e.g., Shavelson & Towne, 2002, pp. 99–106, SagePub, 2014).”
“Second, by emphasizing the study of a phenomenon within its real-world context, the case study method favors the collection of data in natural settings, compared with relying on “derived” data (Bromley, 1986, p. 23, SagePub, 2014)”
Third, the case study method is now commonly used in conducting evaluations (SagePub, 2014).
At least three situations create relevant opportunities for applying the case study method as a research method. First and most important, the choices among different research methods, including the case study method, can be determined by the kind of research question that a study is trying to address (e.g., Shavelson & Towne, 2002, pp. 99–106). Accordingly, case studies are pertinent when your research addresses either a descriptive question—“What is happening or has happened?”—or an explanatory question—“How or why did something happen?” As contrasting examples, alternative research methods are more appropriate when addressing two other types of questions: an initiative’s effectiveness in producing a particular outcome (experiments and quasi-experiments address this question) and how often something has happened (surveys address this question). However, the other methods are not likely to provide the rich descriptions or the insightful explanations that might arise from doing a case study. Second, by emphasizing the study of a phenomenon within its real-world context, the case study method favors the collection of data in natural settings, compared with relying on “derived” data (Bromley, 1986, p. 23)—for example, responses to a researcher’s instruments in an experiment or responses to questionnaires in a survey.
For instance, education audiences may want to know about the following:
• How and why a high school principal had done an especially good job
• The dynamics of a successful (or unsuccessful) collective bargaining negotiation with severe consequences (e.g., a teachers’ strike)
• Everyday life in a special residential school
Third, the case study method is now commonly used in conducting evaluations.
Authoritative sources such as the U.S. Government Accountability Office (1990) and others (e.g., Yin, 1992, 1994, 1997) have documented the many evaluation applications of the case study method.
5) Concerns and Problems
Often considered the “method of last resort”
Lack of trust in procedures and processes
Inability to generalize results
When done poorly, problems increase.
 Despite its apparent applicability in studying many relevant real-world situations  and addressing important research questions, case study research nevertheless has not achieved widespread recognition as a method of choice. Some people actually think of it as a method of last resort. Why is this?

Part of the notoriety comes from thinking that case study research is the exploratory phase for using other social science methods (i.e., to collect some data to determine whether a topic is indeed worthy of further investigation). In this mode, case study research appears to serve only as a prelude. As a result, it may not be considered as involving a serious, much less rigorous, inquiry. However, such a traditional  and sequential (if not hierarchical) view of social science methods is entirely outdated. Experiments and surveys have their own exploratory modes, and case study research goes well beyond exploratory functions. In other words, all the methods can cover the entire range of situations, from initial exploration to the completion of full and final authoritative studies, without calling on any other methods.

A second part of the notoriety comes from a lack of trust in the credibility of a case study researcher’s procedures. They may not seem to protect sufficiently against such biases as a researcher seeming to find what she or he had set out to find. They also may suffer from a perceived inability to generalize the case study’s findings to any broader level.

Indeed, when case study research is done poorly, these and other challenges can come together in a negative way, potentially re-creating conventional prejudices against the case study method. In contrast, contemporary case study research calls for meeting these challenges by using more systematic procedures.
Case study research involves systematic data collection and analysis procedures, and case study findings can be generalized to other situations through analytic (not statistical) generalization (see Yin, 2009a, pp. 40–45).

6) Three Steps in designing a “Case”
 Define the Case
 Select one of 4 types of case study designs
 Use theory in design work
Explicitly attending to the design of your case study serves as the first important way of using more systematic procedures when doing case study research. The needed design work contrasts sharply with the way that many people may have stumbled into doing case studies in an earlier era. When doing contemporary case studies, three steps provide a helpful framework for the minimal design work.
The first step is to define the “case” that you are studying. Arriving at even a tentative definition helps enormously in organizing your case study. Generally, you should stick with your initial definition because you might have reviewed literature or developed research questions specific to this definition. However, a virtue of the case study method is the ability to redefine the “case” after collecting some early data. Such shifts should not be suppressed. However, beware when this happens— you may then have to backtrack, reviewing a slightly different literature and possibly revising the original research questions.
A “case” is generally a bounded entity (a person, organization, behavioral condition, event, or other social phenomenon), but the boundary between the case and its contextual conditions—in both spatial and temporal dimensions—may be blurred, as previously noted. The case serves as the main unit of analysis in a case study. At the same time, case studies also can have nested units within the main unit (see “embedded subcases” in the next section).
In undertaking the definitional task, you should set a high bar: Think of the possibility that your case study may be one of the few that you ever complete.
You might, therefore, like to put your efforts into as important, interesting, or significant a case as possible.
What makes a case special? One possibility arises if your case covers some distinctive if not extreme, unique, or revelatory event or subject, such as
• the revival or renewal of a major organization,
• the creation and confirmed efficacy of a new medical procedure,
• the discovery of a new way of reducing gang violence,
• a critical political election,
• some dramatic neighborhood change, or even
• the occurrence and aftermath of a natural disaster.
By definition, these are likely to be remarkable events. To do a good case study of them may produce an exemplary piece of research.
If no such distinctive or unique event is available for you to study, you may want to do a case study about a common or everyday phenomenon. Under these circumstances, you need to define some compelling theoretical framework for selecting your case. The more compelling the framework, the more your case study can contribute to the research literature. In this sense, you will have conducted a “special” case study. One popular theme is to choose an otherwise ordinary case that has nevertheless been associated with some unusually successful outcome.
2. Selecting One of Four Types of Case Study Designs
A second step calls for deciding whether your case study will consist of a single or multiple cases—what then might be labeled as a single- or a multiple-case study. Whether single or multiple, you also can choose to keep your case holistic or to have embedded subcases within an overall holistic case. The resulting two-by-two matrix leads to four different case study designs. These, together with the dashed lines representing the blurred boundary between a case and its context, are illustrated in Figure 1.1. For example, your holistic case might be about how and why an organization implemented certain staff promotion policies (holistic level), but the study also might include data collected about a group of employees—whether from a sample survey, from an analysis of the employees’ records, or from some other source (the embedded level).
If you were limited to a single organization, you would have an embedded, single-case study. If you studied two or more organizations in the same manner, you would have an embedded, multiple-case study.
The multiple-case design is usually more difficult to implement than a single-case design, but the ensuing data can provide greater confidence in your findings.
The selection of the multiple cases should be considered akin to the way that you would define a set of multiple experiments—each case (or experiment) aiming to examine a complementary facet of the main research question. Thus, a common multiple-case design might call for two or more cases that deliberately tried to test the conditions under which the same findings might be replicated.
Alternatively, the multiple cases might include deliberately contrasting cases. As an important note, the use of the term replication in relation to multiple case designs intentionally mimics the same principle used in multiple experiments (e.g., Hersen & Barlow, 1976). In other words, the cases in a multiple-case study, as in the experiments in a multiple-experiment study, might have been selected either to predict similar results (direct replications) or to predict contrasting results but for anticipatable reasons (theoretical replications).
An adjunct of the replication parallelism is the response to an age-old question:
“How many cases should be included in a multiple-case study?” The  question continues to plague the field to this day (e.g., Small, 2009). Students and scholars appear to assume the existence of a formulaic solution, as in conducting a power analysis to determine the needed sample size in an experiment or survey. For case studies (again, as with multiple experiments) no such formula exists. Instead, analogous to the parallel question of “how many experiments need to be conducted to arrive at an unqualified result,” the response is still a judgmental one: the more cases (or experiments), the greater confidence or certainty in a study’s findings; and the fewer the cases (or experiments), the less confidence or certainty.
More important, in neither the case study nor the experimental situation would a tallying of the cases (or the experiments) provide a useful way for deciding whether the group of cases (or experiments) supported an initial proposition or not. Thus, some investigators of multiple-case studies might think that a cross case analysis would largely consist of a simple tally (e.g., “Five cases supported the proposition, but two did not”) as the way of arriving at a cross-case  conclusion.
However, the numbers in any such tally are likely to be too small and undistinguished to support such a conclusion with any confidence.
3. Using Theory in Design Work
A third step involves deciding whether or not to use theory to help complete your essential methodological steps, such as developing your research question(s), selecting your case(s), refining your case study design, or defining the relevant data to be collected. (The use of theory also can help organize your initial data analysis strategies and generalize the findings from your case study—discussed later in this chapter.)
For example, an initial theoretical perspective about school principals might claim that successful principals are those who perform as “instructional leaders.”
A lot of literature (which you would cite as part of your case study) supports this perspective. Your case study could attempt to build, extend, or challenge this perspective, possibly even emulating a hypothesis-testing approach. However, such a theoretical perspective also could limit your ability to make discoveries  (i.e., to discover from scratch just how and why a successful principal had been successful). Therefore, in doing this and other kinds of case studies, you would need to work with your original perspective but also be prepared to discard it after initial data collection.
Nevertheless, a case study that starts with some theoretical propositions or theory will be easier to implement than one having no propositions. The theoretical propositions should by no means be considered with the formality of grand theory in social science but mainly need to suggest a simple set of relationships such as “a [hypothetical] story about why acts, events, structures, and thoughts occur” (Sutton & Staw, 1995, p. 378). More elaborate theories will (desirably) point to more intricate patterns. They (paradoxically) will add precision to the later analysis, yielding a benefit similar to that of having more complex theoretical propositions when doing quasi-experimental research (e.g., Rosenbaum, 2002, pp. 5–6, 277–279). As an example, in case study evaluations, the use of logic models represents a theory about how an intervention is supposed to work.
This desired role of theory sometimes serves as one point of difference between case study research and related qualitative methods such as ethnography (e.g., Van Maanen, 1988) and grounded theory (e.g., Corbin & Strauss, 2007). For instance, qualitative research may not necessarily focus on any “case,” may not be concerned with a unit of analysis, and may not engage in formal design work, much less encompass any theoretical perspective.
In general, the less experience you have had in doing case study research, the more you might want to adopt some theoretical perspectives. Without them, and without adequate prior experience, you might risk false starts and lost time in doing your research. You also might have trouble convincing others that your case study has produced findings of much value to the field. At the same time, the opposite tactic of deliberately avoiding any theoretical perspective, though risky, can be highly rewarding—because you might then be able to produce a
“break-the-mold” case study.
7) Data Collection
1. Direct observations (e.g., human actions or a physical environment)
2. Interviews (e.g., open-ended conversations with key participants)
3. Archival records (e.g., student records)
4. Documents (e.g., newspaper articles, letters and e-mails, reports)
5. Participant-observation (e.g., being identified as a researcher but also filling a real-life role in the scene being studied)
6. Physical artifacts (e.g., computer downloads of employees’ work)
Varieties of Sources of Case Study Data
Case study research is not limited to a single source of data, as in the use of questionnaires for carrying out a survey. In fact, good case studies benefit from having multiple sources of evidence. This slide lists six common sources of evidence. You may use these six in any combination, as well as related sources such as focus groups (a variant of interviews), depending on what is available and relevant for studying your case(s).
Regardless of its source, case study evidence can include both qualitative and quantitative data. Qualitative data may be considered non-numeric data—for instance, categorical information that can be systematically collected and presented in narrative form, such as word tables.
Quantitative data can be considered numeric data—for instance, information based on the use of ordinal if not interval or ratio measures.
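To make the distinction concrete, here is a minimal Python sketch, using entirely hypothetical interview data, of how a single "word table" can hold qualitative entries in narrative form next to a simple ordinal rating. The column names, sources, and ratings are illustrative assumptions, not part of any prescribed instrument.

```python
# A minimal sketch (hypothetical data) of holding qualitative and quantitative
# case study evidence side by side in a simple "word table":
# rows are interview sources, columns mix narrative text and an ordinal rating.
import pandas as pd

word_table = pd.DataFrame(
    {
        "source": ["Principal interview", "Teacher interview", "Parent interview"],
        "theme_noted": [
            "Describes the reform as teacher-driven",
            "Emphasizes added planning time",
            "Focuses on communication with families",
        ],
        # Ordinal (quantitative) code assigned by the researcher:
        # 1 = weak support for the reform, 3 = strong support
        "support_for_reform": [3, 2, 2],
    }
)

print(word_table)                                     # qualitative entries in narrative form
print(word_table["support_for_reform"].describe())    # simple numeric summary of the ordinal codes
```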
Direct Observations: Two Examples
Let’s start with one of the most common methods: making direct observations in a field setting. Such observations can focus on human actions, physical environments, or real-world events. If nothing else, the opportunity to make such observations is one of the most distinctive features in doing case studies.
As an initial example, the conventional manner of collecting observational data takes the form of using your own five senses, taking field notes, and ultimately creating a narrative based on what you might have seen, heard, or otherwise sensed. (The application in Chapter 2 provides an example of such a narrative.) Mechanical devices such as audiotape recorders or audio-video cameras also can help.
Based on these observations, the composing of the narrative must overcome the caveat discussed earlier by presenting the observational evidence along with a careful note: whether the presentation represents your trying to be as neutral and factual as possible, whether it represents the view of (one or more of) the field participants in your case study, or whether it represents your own deliberate interpretation of what has been observed. Any of the three is acceptable, depending on the goal of your data collection, but you must explicitly clarify which of the three is being presented and avoid confusing them inadvertently. Once properly labeled, you even may present information from two different points of view, again depending on the goal of your data collection and case study.
Besides this traditional observational procedure, a second way of making direct observations comes from using a formal observational instrument and then noting, rating, or otherwise reporting the observational evidence under the categories specified by the instrument. Use of a formal workplace instrument, aimed at defining the frequency and nature of supervisor-employee interactions, is a
commonplace practice in doing management research. Such an instrument allows the observational evidence to be reported in both narrative and tabular forms (e.g., tables showing the frequency of certain observations). In a similar manner, a formal instrument can be used to define and code other observed interactions, such as in a study of the two-way dialogue between a doctor and a patient or
between a teacher and a class. In any of these situations, the interactions may have been observed directly or recorded with an audio-visual device.
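As an illustration of how a formal instrument's categories can be tallied into tabular form, the following Python sketch counts coded supervisor-employee interactions. The coding scheme and the stream of observations are made up for the example; any real instrument would define its own categories.

```python
# A minimal sketch (hypothetical codes and observations) of reporting evidence
# from a formal observational instrument: each observed supervisor-employee
# interaction is coded into one category, and the codes are then tallied.
from collections import Counter

codes = ["gives_feedback", "asks_question", "gives_instruction", "social_talk"]
observed = [
    "gives_instruction", "asks_question", "gives_feedback",
    "gives_instruction", "social_talk", "gives_feedback", "gives_feedback",
]

frequency = Counter(observed)
for code in codes:
    # Frequency table that could be reported alongside the narrative account
    print(f"{code:18s} {frequency.get(code, 0)}")
```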
Open-Ended Interviews
A second common source of evidence for case studies comes from open-ended interviews, also called “nonstructured interviews.” These interviews can offer richer and more extensive material than data from surveys or even the open-ended portions of survey instruments. On the surface, the open-ended portions of surveys may resemble open-ended interviews, but the latter are generally less structured and can assume a lengthy conversational mode not usually found in surveys. For instance, the open-ended interviews in case studies can consume two or more hours on more than a single occasion. Alternatively, the conversations can occur over the course of an entire day, with a researcher and one or more participants accompanying one another to view or participate in different events.
The flexible format permits open-ended interviews, if properly done, to reveal how case study participants construct reality and think about situations, not just to provide the answers to a researcher’s specific questions and own implicit construction of reality. For some case studies, the participants’ construction of reality provides important insights into the case. The insights gain even further value if the participants are key persons in the organizations, communities, or small groups being studied, not just the average member of such groups. For a case study of a public agency or private firm, for instance, a key person would be the head of the agency or firm. For schools, the principal or a department head would carry the same status. Because by definition only one or a few persons will fill such roles, their interviews also have been called “elite” interviews.
Archival Records
In addition to direct observations and open-ended interviews, a third common source consists of archival data—information stored in existing channels such as electronic records, libraries, and old-fashioned (paper) files. Newspapers, television, and the mass media are but one type of channel. Records maintained by public agencies, such as public health or law enforcement or court records, serve as another. The resulting archival data can be quantitative or qualitative (or both).
From a research perspective, the archival data can be subject to their own biases or shortcomings. For instance, researchers have long known that police records of reported crime do not reflect the actual amount of crime that might have occurred. Similarly, school systems’ reports of their enrollment, attendance, and dropout rates may be subject to systematic under- or over counting. Even the U.S. Census struggles with the completeness of its population counts and the potential problems posed because people residing in certain kinds of locales (rural and urban) may be undercounted.
Likewise, the editorial leanings of different mass media are suspected to affect their choice of stories to be covered (or not covered), questions to be asked (or not asked), and textual detail (or lack of detail). All these editorial choices can collectively produce a systematic bias in what would otherwise appear to be a full and factual account of some important event.
8) Evidence from multiple sources
Triangulation
Literature Review
Direct Observation
* You are always better off using multiple rather than single sources of evidence. The availability of data from the preceding as well as the three other common sources in the last slide creates an important opportunity during case study data collection: You should constantly check and recheck the consistency of the findings from different as well as the same sources (e.g., Duneier, 1999, pp. 345–347).
In so doing, you will be triangulating—or establishing converging lines of evidence—which will make your findings as robust as possible.
How might this triangulation work? The most desired convergence occurs when three (or more) independent sources all point to the same set of events, facts, or interpretations. For example, what might have taken place at a group meeting might have been reported to you (independently) by two or more attendees at the meeting, and the meeting also might have been followed by some documented outcome (e.g., issuance of a new policy that was the presumed topic of the meeting). You might not have been able to attend the meeting yourself, but having these different sources would give you more confidence about concluding what had transpired than had you relied on a single source alone.
Triangulating is not always as easy as the preceding example. Sometimes, as when you interview different participants, all appear to be giving corroborating evidence about how their organization works—for example, how counselors treat residents in a drug treatment facility. But in fact, they all may be echoing the same institutional “mantra,” developed over time for speaking with outsiders (such as researchers or media representatives), and the collective “mantra” may not necessarily coincide with the organization’s actual practices.
Reviewing the literature may help you anticipate this type of situation, and making your own direct observations also may be extremely helpful. However, when relying on direct observations, note that another problem can arise. Because you may have prescheduled your presence in a field setting, the participant(s) may have had the opportunity to customize their routines just for you. So, getting at the actual practices in the organization or among a group of people may not be as easy as you might think. Nevertheless, you always will be better off using multiple rather than single sources of evidence.
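One way to picture the bookkeeping behind triangulation is sketched below in Python, using hypothetical claims and sources. It simply flags which findings are supported by two or more independent lines of evidence and which still rest on a single source; the claims and source names are assumptions for illustration only.

```python
# A minimal sketch (hypothetical sources and claims) of checking for triangulation:
# a finding is treated as corroborated only when it is supported by evidence
# from more than one independent source.
evidence = {
    "policy was adopted at the March meeting": {
        "attendee A interview", "attendee B interview", "policy memo",
    },
    "counselors follow the written protocol": {
        "director interview",
    },
}

for claim, sources in evidence.items():
    status = "corroborated" if len(sources) >= 2 else "single-source (needs more evidence)"
    print(f"{claim}: {status} ({', '.join(sorted(sources))})")
```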
9) Case Study Protocol
The typical protocol consists of a set of questions to be addressed while collecting the case study data (whether actually taking place at a field setting or at your own desk when extracting information from an archival source).
Importantly, the questions in the protocol are directed at the researcher, not at any field participant. In this sense, the protocol differs entirely from any instrument used in a conventional interview or survey. The protocol’s questions in effect serve as a mental framework.
In collecting your data, and regardless of your sources of evidence, you will
find the development and use of a case study protocol to be extremely helpful, if
not essential. The typical protocol consists of a set of questions to be addressed
while collecting the case study data (whether actually taking place at a field setting
or at your own desk when extracting information from an archival source). Importantly, the questions in the protocol are directed at the researcher, not at any field participant. In this sense, the protocol differs entirely from any instrument used in a conventional interview or survey. The protocol’s questions in effect serve as a mental framework, not unlike similar frameworks held by detectives investigating crimes, by journalists chasing a story, or by clinicians considering different diagnoses based on a patient’s symptoms. In those situations, a detective, journalist, or clinician may privately entertain one or more lines of inquiry (including rival hypotheses), but the specific questions posed to any participant are tuned to each specific interview situation. Thus, the questions as actually verbalized in an interview derive from the line of inquiry (e.g., mental framework) but do not come from a verbatim script (e.g., questionnaire).
10) Collect Data for Rival Explanations
• A final data collection topic stresses the role of seeking data to examine rival explanations. The desired rival thinking should draw from a continual sense of skepticism as a case study proceeds.
• During data collection, the skepticism should involve worrying about whether events and actions are as they appear to be and whether participants are giving candid responses.
Having a truly skeptical attitude will result in collecting more data than if rivals were not a concern. A final data collection topic stresses the role of seeking data to examine rival explanations. The desired rival thinking should draw from a continual sense of skepticism as a case study proceeds. During data collection, the skepticism should involve worrying about whether events and actions are as they appear to be and whether participants are giving candid responses. Having a truly skeptical attitude will result in collecting more data than if rivals were not a concern. For instance, data collection should involve a deliberate and vigorous search for “discrepant evidence,” as if you were trying to establish the potency of the plausible rival rather than seeking to discredit it (Patton, 2002, p. 276; Rosenbaum, 2002, pp. 8–10). Finding no such evidence despite a diligent search again increases confidence about your case study’s later descriptions, explanations, and interpretations.
Rival explanations are not merely alternative interpretations. True rivals compete directly with each other and cannot coexist. In other words, research interpretations may be likened to a combatant who can be challenged by one or more rivals. Rivals that turn out to be more plausible than an original interpretation need to be rejected, not just footnoted.
Case study research demands the seeking of rival explanations throughout the research process. Interestingly, the methodological literature offers little inkling of the kinds of substantive rivals that might be considered by researchers, either in doing case study research or other kinds of social science research. The only rivals to be found are methodological but not substantive ones—for instance, involving the null hypothesis, experimenter effects, or other potential artifacts created by the research procedures.
In contrast, in detective work, a substantive rival would be an alternative explanation of how a crime had occurred, compared with the explanation that might originally have been  entertained.

11) Presenting your Case
  You need to present the evidence in your case study with sufficient clarity (e.g., in separate texts, tables, and exhibits) to allow readers to judge independently your later interpretation of the data.
Ideally, such evidence will come from a formal case study database that you compile for your files after completing your data collection.
Properly dealing with case study evidence requires a final but essential practice:
You need to present the evidence in your case study with sufficient clarity (e.g., in separate texts, tables, and exhibits) to allow readers to judge independently your later interpretation of the data. Ideally, such evidence will come from a formal case study database that you compile for your files after completing your data collection.
Unfortunately, older case studies frequently mixed evidence and interpretation.
This practice may still be excusable when doing a unique case study or a revelatory
case study, because the insights may be more important than knowing the strength of the evidence for such insights. However, for most case studies, mixing evidence and interpretation may be taken as a sign that you do not understand the difference between the two or that you do not know how to handle data (and hence proceeded prematurely to interpretation).
12)  Case Study Data Analysis
Whether using computer software to help you or not, the researcher will be the one who must define the codes to be used and the procedures for logically piecing together the coded evidence into broader themes—in essence creating your own unique algorithm befitting your particular case study. The strength of the analytic course will depend on a marshaling of claims that use your data in a logical fashion.
Your analysis can begin by systematically organizing your data (narratives and words) into hierarchical relationships, matrices, or other arrays (e.g., Miles & Huberman, 1994).
Case study analysis takes many forms, but none yet follow the routine procedures that may exist with other research methods. The absence of any cookbook for analyzing case study evidence has been only partially offset by the development of prepackaged computer software programs. They can support the analysis of large amounts of narrative text by following your instructions in coding and categorizing your notes or your verbatim transcripts. However, unlike software for analyzing numeric data, whereby an analyst provides the input data and the computer uses an algorithm to estimate some model and proceeds to produce the output data, there is no automated algorithm when analyzing narrative data. Whether using computer software to help you or not, you will be the one who must define the codes to be used and the procedures for logically piecing together the coded evidence into broader themes—in essence creating your own unique algorithm befitting your particular case study. The strength of the analytic course will depend on a marshaling of claims that use your data in a logical fashion. Your analysis can begin by systematically organizing your data (narratives and words) into hierarchical relationships, matrices, or other arrays (e.g., Miles & Huberman, 1994). A simple array might be a word table, organized by some rows and columns of interest and presenting narrative data in the cells of the table.
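The following Python sketch, with invented field notes and invented code labels, illustrates the kind of researcher-defined coding and simple arraying described above: excerpts are assigned codes and then grouped into broader themes. Nothing here is an automated algorithm, only bookkeeping around the researcher's own judgments.

```python
# A minimal sketch (hypothetical notes and codes) of coding narrative evidence:
# the researcher assigns a code to each excerpt and then groups excerpts by theme.
field_notes = [
    ("Teachers met weekly to review student work", "collaboration"),
    ("The principal rearranged the schedule to protect planning time", "instructional_leadership"),
    ("Parents were invited to the data review sessions", "community_engagement"),
    ("Grade-level teams shared lesson plans", "collaboration"),
]

themes = {}
for excerpt, code in field_notes:
    themes.setdefault(code, []).append(excerpt)

for code, excerpts in themes.items():
    print(f"Theme: {code}")
    for e in excerpts:
        print(f"  - {e}")
```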
Given this or other arrays, several different analytic techniques can then be used (see Yin, 2009a, pp. 136–161, for a fuller discussion). Discussed next are four examples. The first three are pattern matching, explanation building, and time series analysis. Multiple-case studies, in addition to using these several techniques within each single case, would then follow a replication logic, which is the fourth technique.
13) Techniques
Pattern-Matching
Open-Ended Questions
Time-Series-Like Analysis
If selecting your case(s) to be studied is the most critical step in doing case study research, analyzing your case study data is probably the most troublesome.
Much of the problem relates to false expectations: that the data will somehow “speak for themselves,” or that some counting or tallying procedure will be sufficient to produce the main findings for a case study. Wrong. Instead, consider the following alternatives. You actually made some key assumptions for your analysis when you defined your research questions and your case. Was your motive in doing the case study mainly to address your research questions? If so, then the techniques for analyzing the data might be directed at those questions first. Was your motive to derive more general lessons for which your case(s) are but examples? If so, your analysis might be directed at these lessons. Finally, if your case study was driven by a discovery motive, you might start your analysis with what you think you have discovered.
Now comes a “reverse” lesson. Realizing that key underlying assumptions for later analysis may in fact have been implicit at the initial stages of your case study, you could have anticipated and planned the analytic strategies or implications when conducting those initial stages. Collecting the actual data may lead to changes in this plan, but having an initial plan that needs to be revised (even drastically) may be better than having no plan at all.
For instance, one possibility is to stipulate some pattern of expected findings at the outset of your case study. A pattern-matching logic would later enable you to compare your empirically based pattern (based on the data you had collected) with the predicted one. The prediction in a community study might have stipulated that the patterns of outcomes in many different economic and social sectors (e.g., retail sales, housing sales, unemployment, and population turnover) would be “catastrophically” affected by a key event— the closing of a military base in a small, single-employer town (Bradshaw, 1999). The analysis would then examine the data in each sector, comparing pre-post trends with those in other communities and statewide trends. The pattern-matching results should be accompanied by a detailed explanation of how and why the base closure had (or had not) affected these trends. By also collecting data on and then examining possible rival explanations (e.g., events co-occurring with the key event or other contextual conditions), support for the claimed results would be strengthened even further.
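A pattern-matching check of this kind can be reduced to a small amount of arithmetic. The Python sketch below uses hypothetical sector figures and a hypothetical comparison trend to ask, sector by sector, whether the observed pre-post change matches the predicted "catastrophic decline" pattern; none of the numbers come from the cited study.

```python
# A minimal sketch (hypothetical figures) of a pattern-matching check: for each
# sector, compare the direction and size of the observed pre-post change with the
# predicted decline and with a comparison-community trend.
sectors = {
    # sector: (pre value, post value, comparison-community change in %)
    "retail_sales": (100.0, 62.0, -3.0),
    "housing_sales": (100.0, 55.0, -1.0),
    "employment": (100.0, 70.0, +0.5),
}

predicted_direction = "decline"  # the stipulated pattern for the affected town

for sector, (pre, post, comparison_change) in sectors.items():
    observed_change = 100.0 * (post - pre) / pre
    observed_direction = "decline" if observed_change < 0 else "no decline"
    match = observed_direction == predicted_direction and observed_change < comparison_change
    print(f"{sector}: {observed_change:+.1f}% vs comparison {comparison_change:+.1f}% -> "
          f"{'matches predicted pattern' if match else 'does not match'}")
```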
Second, a case study may not have started with any predicted patterns but in fact may have started with an open-ended research question that would lead to the use of an explanation-building technique. The purpose of the case study here would be to build an explanation for the issue and then again to deliberately entertain rival explanations.
A third technique mimics the time-series analyses in quantitative research. In case study research, the simplest time series can consist of assembling key events into a chronology. The resulting array (e.g., a word table consisting of time and types of events as the rows and columns) may not only produce an insightful descriptive pattern but also may hint at possible causal relationships, because any presumed causal condition must precede any presumed outcome condition. Assuming again the availability of data about rival hypotheses, such information would be used in examining the chronological pattern. When the rivals do not fit the pattern, their rejection considerably strengthens the basis for supporting your original claims.
If the case study included some major intervening event in the midst of the chronological sequence, the array could serve as a counterpart to an interrupted time series in experimental research. For instance, imagine a case study in which a new executive assumed leadership over an organization. The case study might have tracked the production, sales, and profit trends before and after the executive’s ascendance. If all the trends were in the appropriate upward direction, the case study could begin to build a claim, crediting the new leader with these accomplishments. Again, attending to rival conditions (such as that earlier policies might have been put into place by the new executive’s predecessor) and  making them part of the analysis would further strengthen the claim.
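To show the interrupted time-series idea in miniature, the following Python sketch uses hypothetical yearly sales figures, with the leadership change assumed to occur after year 5, and compares the simple linear trend before and after the event.

```python
# A minimal sketch (hypothetical yearly sales figures) of an interrupted
# time-series comparison: estimate the trend before and after an intervening event.
sales = [100, 103, 101, 104, 102,   # years 1-5: before the change in leadership
         110, 118, 125, 133, 140]   # years 6-10: after

def slope(series):
    """Least-squares slope of a series against its index (a simple trend estimate)."""
    n = len(series)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(series) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, series))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

before, after = sales[:5], sales[5:]
print(f"trend before: {slope(before):+.2f} per year")
print(f"trend after:  {slope(after):+.2f} per year")
# A markedly steeper post-event trend is consistent with, but does not prove,
# the claim that the new leadership made the difference; rival explanations
# (such as policies set by the predecessor) still need to be examined.
```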
14) What about when quantitative data is available?
Within a single case, correlations and other statistical analyses can be used only if there are sufficient data points to support them.
The preceding example was deliberately limited to a situation where a case study did not attempt any statistical analysis, mainly because of a lack of data points other than some simple pre-post comparison. However, case study analyses can assume a different posture when more time intervals are relevant and
sufficient data are available. In education, a common single-case design might focus on a school or school district as a single organization of interest (e.g., Supovitz & Taylor, 2005; Yin & Davis, 2007).

Within the single case, considerable attention might be devoted to the collection and analysis of highly quantitative student achievement data. For instance, a study of a single school district tracked student performance over a 22-year period (Teske, Schneider, Roch, & Marschall, 2000).

The start of the period coincided with a time when the district was slowly implementing an educational reform that was the main subject of the study. The available data then permitted the case study to use statistical models in reading and in mathematics to test the correlation between reform and student performance.
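The kind of within-case statistical check described above can be as simple as a correlation once enough time points exist. The Python sketch below uses invented yearly district means and an invented exposure indicator; it is only meant to show the shape of such an analysis, not the published study's actual model.

```python
# A minimal sketch (hypothetical scores) of correlating an indicator of reform
# exposure with average student achievement across years within a single case.
years_of_reform_exposure = list(range(0, 11))                       # hypothetical: 0-10 years into the reform
mean_reading_score = [61, 62, 61, 64, 66, 65, 68, 70, 71, 73, 74]   # hypothetical district means

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

print(f"r = {pearson_r(years_of_reform_exposure, mean_reading_score):.2f}")
# A strong positive r is consistent with the reform claim, but within a single
# case it still has to be weighed against rival explanations.
```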
15) Meta Analysis: A study of multiple Case Studies
1. The logic for such a cross-case synthesis emulates that used in addressing whether the findings from a set of multiple experiments—too small in number to be made part of any quantitative meta-analysis (a study of the results of other studies)—support any broader pattern of conclusions.
2. The replication or corroboratory frameworks can vary. In a direct replication, the single cases would be predicted to arrive at similar results.

Discussed earlier was the desire to apply a replication logic in interpreting the findings across the cases in a multiple-case study. The logic for such a cross-case synthesis emulates that used in addressing whether the findings from a set of multiple experiments—too small in number to be made part of any quantitative meta-analysis—support any broader pattern of conclusions.
The replication or corroboratory frameworks can vary. In a direct replication,
the single cases would be predicted to arrive at similar results.   In a theoretical replication, each single case’s ultimate disposition also would have been predicted beforehand, but each case might have been predicted to produce a varying or even contrasting result, based on the preconceived propositions. Even more complex could be the stipulation and emergence of a typology of cases based on a multiple-case study.

16) Can we generalize at all from a Case Study?
  To the extent that any study concerns itself with generalizing, case studies tend to generalize to other situations (on the basis of analytic claims), whereas surveys and other quantitative methods tend to generalize to populations (on the basis of statistical claims).
You will also want to determine whether you can make any generalizations from your case study. One available procedure applies well to all kinds of case studies, including the holistic, single-case study that has been commonly criticized for having little or no generalizability value. To understand the process requires distinguishing between two types of generalizing: statistical generalizations and analytic generalizations (Yin, 2009a, pp. 38–39). For case study research, the latter is the appropriate type.
Unfortunately, most scholars, including those who do case study research, are imbued with the former type. They think that each case represents a sampling point from some known and larger population and cannot understand how a small set of cases can generalize to any larger population. The simple answer is that a single or small set of cases cannot generalize in this manner, nor is it intended to. Furthermore, the incorrect assumption is that statistical generalizations, from samples to universes, are the only way of generalizing findings from social science research.
In contrast, analytic generalizations depend on using a study’s theoretical framework to establish a logic that might be applicable to other situations. Again, an appealing parallel exists in experimental science, where generalizing about the findings from a single or small set of experiments does not usually follow any statistical path to a previously defined universe of experiments.
Rather, for both case studies and experiments, the objective for generalizing the findings is the same two-step process, as follows.
The first step involves a conceptual claim whereby investigators show how their study’s findings have informed the relationships among a particular set of concepts, theoretical constructs, or sequence of events. The second step involves applying the same theoretical propositions to implicate other situations, outside the completed case study, where similar concepts, constructs, or sequences might be relevant. For example, political science’s best-selling research work has been a single-case study about the Cuban missile crisis of 1962 (Allison, 1971; Allison & Zelikow, 1999). The authors do not generalize their findings and theoretical framework to U.S.-Cuban relations—or to the use of missiles. They use their theoretical propositions to generalize their findings to the likely responses of national governments when involved in superpower confrontation and international crises.
Making analytic generalizations requires carefully constructed claims (e.g., Kelly & Yin, 2007)—again, whether for a case study or for an experiment. The ultimate generalization is not likely to achieve the status of “proof” in geometry, but the claims must be presented soundly and resist logical challenge. The relevant “theory” may be no more than a series of hypotheses or even a single hypothesis. Cronbach (1975) further clarifies that the sought-after generalization is not that of a conclusion but, rather, more like a “working hypothesis” (also see Lincoln & Guba, 1985, pp. 122–123). Confidence in such hypotheses can then build as new case studies—again, as with new experiments—continue to produce findings related to the same theoretical propositions.
In summary, to the extent that any study concerns itself with generalizing, case
studies tend to generalize to other situations (on the basis of analytic claims),
whereas surveys and other quantitative methods tend to generalize to populations (on the basis of statistical claims).
Midterm Study Guide
Terms to Know
5. Quantitative vs. Qualitative research: What do we do if we want the best of both worlds? What’s the difference? Quantitative Research: This type of research is concerned with quantifying (measuring and counting) and subscribes to a particular empirical approach to knowledge, believing that by measuring accurately enough we can make claims about the object under study.
Qualitative Research: This type of research is concerned with the quality or qualities of an experience or phenomenon. Qualitative research rejects the notion of there being a simple relationship between our perception of the world and the world itself, instead arguing that each individual places different meaning on different events or experiences and that these are constantly changing. Qualitative research generally gathers text-based data through exercises with a small number of participants, usually semi-structured or unstructured interviews.
“Quantitative research is concerned with quantifying (measuring and counting) and subscribes to a particular empirical approach to knowledge, believing that by measuring accurately enough we can make claims about the object under study. Due to the stringency and ‘objectivity’ of this form of research, quantitative research is often conducted in controlled settings, such as labs, to make sure that the data is as objective and unaffected by external conditions as possible. This helps with the replicability of the study: by conducting a study more than once and receiving the same or similar responses, you can be pretty sure your results are accurate. Quantitative research tends to be predictive in nature and is used to test research hypotheses, rather than descriptions of processes. Quantitative research tends to use a large number of participants, using experimental methods or very structured psychometric questionnaires.
By contrast, qualitative research is concerned with the quality or qualities of an experience or phenomenon. Qualitative research rejects the notion of there being a simple relationship between our perception of the world and the world itself, instead arguing that each individual places different meaning on different events or experiences and that these are constantly changing. Qualitative research generally gathers text-based data through exercises with a small number of participants, usually semi-structured or unstructured interviews.”
Independent Variable (X) and Dependent Variable (Y)
The independent variable is: the variable that the researcher hypothesizes will have an effect on the dependent variable
Usually manipulated (experimental study)
The independent variable is: manipulated by means of a program, treatment, or intervention done to only one group in the study (the experimental group).
The control group gets the standard treatment or no treatment.
Dependent Variable (Y): The dependent variable is a factor, trait, or condition that can exist in differing amounts or types. It is not manipulated but is presumed to vary with changes in the independent variable. It is the variable the researcher is interested in explaining.
Randomization. Each subject in the study has an equal chance of being assigned to the control group or the experimental group.
Assumes that any important intervening, extraneous, or mediating variable will be equally distributed between the groups, minimizing variance and decreasing selection bias.
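A minimal Python sketch of simple randomization is shown below, using a hypothetical participant list; shuffling the list gives every participant an equal chance of landing in either group.

```python
# A minimal sketch (hypothetical participant list) of random assignment:
# every participant has an equal chance of ending up in the experimental
# group or the control group.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical participants
random.shuffle(participants)                         # every ordering is equally likely

midpoint = len(participants) // 2
experimental_group = participants[:midpoint]
control_group = participants[midpoint:]

print("Experimental:", experimental_group)
print("Control:     ", control_group)
```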
Internal Validity: Asks whether the independent variable really made the difference or the change in the dependent variable. Established by ruling out other factors or threats as rival explanations. Threats to internal validity:
History: an event, other than the intervention, that might have an effect on the dependent variable; the event could be either inside or outside the experimental setting.
Testing: Taking the same test more than once can influence the participant’s responses the next time the test is taken.
Threats to Internal Validity:
Mortality: the loss of study subjects
Selection bias: a partiality in choosing the participants in a study.
Objectivity in Conceptualization of the Research Questions (week six)
Type of design chosen: Accuracy / Feasibility / Control and intervention fidelity
Accomplished through the theoretical framework and literature review
All aspects of the study systematically and logically follow from the research questions.
Feasibility:
Validity (internal and external): Objectivity can be achieved from a thorough review of the literature and the development of a theoretical framework.
Time: Is there enough time for completion of the study.
Subject availability: Are a sufficient number of eligible subjects available?
Facility and equipment availability: Are the necessary equipment and facilities available?
Expense: Is money available for the projected cost?
Experience: Is the study based on the researcher’s experience and interest?
Ethics: Could subjects be harmed?
CONTROL
Terms to Know
2. Reasoning: A priori method (proposed by Charles Peirce): a person develops a belief by reasoning, listening to others’ reasoning, and drawing on previous intellectual knowledge – not based on experience or direct observation. So now that we have a firm grounding in some of the basic theories and theorists within psychology, and a basic understanding of the multiple conceptions of personality and how it develops, the logical next step is to explore how we come to these conclusions with regard to models of personality and other psychological concepts. In other words, how do we scientifically ascertain whether these theories hold water, and how we can most accurately quantify and categorize human behavior, while still attempting to allow for the grey area of individual differences?
Well, Psychological Research is defined as the scientific exploration, designed to describe, predict, and control human behavior, and to understand the hard facts of the human mind across diverse psychological and cross-cultural populations.
• The first way is Authority: we believe something because someone told us that it is true. This could be any authority figure, like a professor, or someone in the media. Because someone told us it’s the truth, we often believe it to be so. Obviously, this has issues from a scientific perspective. When engaging in research we can’t rely on what others tell us alone; we need to have hard, replicable data to support it.
• The second method is Reasoning. The main method of reasoning is the a priori method (proposed by Charles Peirce), where a person develops a belief by reasoning, listening to others’ reasoning, and drawing on previous intellectual knowledge – not based on experience or direct observation. While this might work when developing opinions in day-to-day life, we can’t say that this gives us hard facts that are generalizable to any population. Opinions derived from this method, however, can be said to create a good starting point for research.
8. Experience-Based errors in thinking:
Availability Heuristic - An availability heuristic is a mental shortcut that relies on immediate examples that come to mind.
As a result, you might judge that those events are more frequent and possible than others and tend to overestimate the probability and likelihood of similar things happening in the future.
Finally, the last common error in psychological research is the Availability Heuristic, a mental shortcut that relies on immediate examples that come to mind.
When you are trying to make a decision, a number of related events or situations might immediately spring to the forefront of your thoughts. As a result, you might judge that those events are more frequent and possible than others. You give greater credence to this information and tend to overestimate the probability and likelihood of similar things happening in the future.
20. Ethnography (Ethnographic Methods): Ethnographic methods: A type of social scientific method that gains insight into social relations through participant observation, interviews, and the analysis of art, texts, and oral histories. It is commonly used to analyze culture and is the most common method of anthropology.
Social Science Research Methods In the scientific community, and particularly in psychology and health, there has been an active and ongoing debate on the relative merits of adopting either quantitative or qualitative methods, especially when researching into human behavior
(Bowling, 2009; Oakley, 2000; Smith, 1995a, 1995b; Smith, 1998). In part, this debate formed a component of the development in the 1970s of our thinking about science. Andrew Pickering has described this movement as the “sociology of scientific knowledge” (SSK), where our scientific understanding, developing scientific ‘products’ and ‘know-how’, became identified as forming components in a wider engagement with society’s environmental and social context
(Pickering, 1992, p. 1). Since that time, the debate has continued so that today there is an increasing acceptance of the use of qualitative methods in the social sciences.
Grounded Theory is frequently considered to offer researchers a suitable qualitative method for in-depth exploratory investigations. It is a rigorous approach which provides the researcher with a set of systematic strategies and assumes that when investigating social processes qualitatively from the bottom up there will be an emergence of a working theory about the population from the qualitative data (Willig, 2008, p. 44).
In mixed-methods research, either the qualitative or quantitative components can predominate, or both can have equal status.
1. Types of Unscientific thinking :
(UN) SCIENTIFIC THINKING: Because someone told us that something is true.
Not adhering to the principles of science. Not knowledgeable about science or the scientific method.
Not adhering to the principles of science.
Not knowledgeable about science or the scientific method.
not scientific; not employed in science.
not conforming to the principles or methods of science.
not demonstrating scientific knowledge or scientific methods.
Not consistent with the methods or principles of science; "an unscientific lack of objectivity"
Unreliable self-report data
Unsubstantiated observations
Post-hoc, unsystematic summaries
Speculation and over generalization
2. Experience-Based errors in thinking: Availability Heuristic - An availability heuristic is a mental shortcut that relies on immediate examples that come to mind.
Experimental methods: The most powerful method used in the social sciences, albeit the most difficult to use. It manipulates individuals in a particular way (the treatment) and explores the impact of this treatment. It offers powerful insight by controlling the environment, thereby allowing researchers to isolate the impact of the treatment.
As a result, you might judge that those events are more frequent and possible than others and tend to overestimate the probability and likelihood of similar things happening in the future.
Finally, the last common error in psychological research is the Availability Heuristic, a mental shortcut that relies on immediate examples that come to mind.
When you are trying to make a decision, a number of related events or situations might immediately spring to the forefront of your thoughts. As a result, you might judge that those events are more frequent and possible than others. You give greater credence to this information and tend to overestimate the probability and likelihood of similar things happening in the future.
The term was first coined in 1973 by psychologists Amos Tversky and Daniel Kahneman. They suggested that the availability heuristic occurs unconsciously and operates under the principle that "if you can think of it, it must be important." Things that come to mind more easily are believed to be far more common and more accurate reflections of the real world.
3. The scientific method: A way of knowing characterized by the attempt to apply systematic, objective, empirical methods when searching for causes of natural events.
Probabilistic Statistical determinism: Based on what we have observed, is the likelihood of two events occurring together (whether causal, predictive, or simple relational) greater than chance?
Objectivity: without bias of the experimenter or participants.
Data-driven: conclusions are based on the data-- objective information.
Well, we have what is known as the scientific method:
The Scientific Method Is a way of knowing characterized by the attempt to apply systematic, objective, empirical methods when searching for causes of natural events.
It utilizes probabilistic statistical determinism, asking of the data: given what we have observed, is the likelihood of two events occurring together (whether causal, predictive, or simple relational) greater than chance?
It utilizes, or attempts to utilize Objectivity: producing or participating in research without bias of the experimenter or participants.
And finally, and most importantly, it’s Data-driven: conclusions are based on the data -- objective information and mathematical facts.
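To make "greater than chance" concrete, the Python sketch below uses hypothetical counts to compare how often two events are observed together with how often they would be expected together if they were independent; a formal test, such as a chi-square test of independence, would then judge whether the gap is statistically reliable.

```python
# A minimal sketch (hypothetical counts) of asking whether two events co-occur
# more often than chance alone would predict: compare the observed joint
# frequency with the frequency expected under independence.
n_total = 200      # hypothetical observations
n_event_a = 80     # e.g., participants who studied with flashcards
n_event_b = 100    # e.g., participants who passed the quiz
n_both = 55        # observed co-occurrence

expected_both = (n_event_a / n_total) * (n_event_b / n_total) * n_total
print(f"observed together: {n_both}, expected by chance: {expected_both:.1f}")
# Whether this gap is "greater than chance" would then be judged with a formal
# statistical test (for example, a chi-square test of independence).
```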
Steps of The Scientific Method
Ask a Question
Do Background Research
Construct a Hypothesis
Test Your Hypothesis by Doing an Experiment
Analyze Your Data and Draw a Conclusion
Communicate Your Results
Scientific Thinking in Research
CRITERIA FOR SCIENTIFIC METHOD:
Empirical: All information is based on observation.
Objectivity: Observations are verified by others.
Systematic: Observations are made in a step-by-step fashion.
Controlled: Potentially confusing factors are eliminated.
Public: Built on previous research, open to critique and replication, building towards theories
Lawful: Every event can be understood as a sequence of natural causes and effects.
Determinism: Events and behaviors have causes.
Discoverability: Through systematic observation, we can discover causes – and work towards more certain and comprehensive explanations through repeated discoveries.
Now all of this couldn’t work if we didn’t make some assumptions about behavior when engaging in scientific research.
We assume, as experimental psychologists that:
Lawful: Every event can be understood as a sequence of natural causes and effects.
As psychological researchers we believe in Determinism: That events and behaviors have causes. And we also believe in Discoverability: Through systematic observation, we can discover causes – and work towards more certain and comprehensive explanations through repeated discoveries.
Research Question
Deduction: Reasoning from a set of general statements towards the prediction of some specific event. Based on a theory, one can deduce an event or behavior given particular conditions.
Hypothesis: Prediction about specific events that is derived from the theory.
Induction: Logical process of reasoning from specific events to the theory (either confirming or disproving the theory).
From theory to actual research So what is the Relationship between theory and data you might ask?
Well, the first relationship is Deduction: Reasoning from a set of general statements towards the prediction of some specific event. Based on a theory, one can deduce an event or behavior given particular conditions.
The second is that we can form a Hypothesis: Prediction about specific events that is derived from the theory.
And finally, the last one is Induction: Logical process of reasoning from specific events to the theory (either confirming or disproving the theory).
Next, we will turn to the various methodologies of social science research, but before we do so we will review some of the core aspects of the research process.
So, in common with other sciences, psychology and sociology are both concerned with theories and with data.
A theory provides a general explanation or account of certain findings or data. It also generates a number of experimental hypotheses, which are predictions or expectations about behavior based on the theory. For example, someone might propose a theory in which it is
argued that some people are more hostile than others. This theory could be used to produce various hypotheses or predictions, such as the following: hostile people will express anger more often than non-hostile ones; hostile people will react more strongly than non-hostile
ones to frustrating situations; hostile people will be more sarcastic than non-hostile people.
Psychologists spend a lot of their time collecting data in the form of measures of behavior. Data are collected in order to test various hypotheses. Most people assume that this data collection involves proper or true experiments carried out under laboratory conditions, and it is true that literally millions of laboratory experiments have been carried out in psychology. However, psychologists make use of several methods of investigation, each of which has provided useful information about human behavior.
As you read through the various methods of investigation in your textbook and listen to them in our lectures, it is natural to wonder which methods are the best and the worst as was the topic in our last discussion question. In some ways, it may be more useful to compare the methods used by psychologists to the clubs used by the golf professional. The driver is not a better or worse club than the putter, it is simply used for a different purpose.
In similar fashion, each method of investigation used by psychologists is very useful for testing some hypotheses, but is of little or no use for testing other hypotheses.
However, as we will discuss further the experimental method provides the best way of being able to make inferences about cause and effect.
4. Assumptions about behavior in research: In many studies, use is made of pre-existing groups of people. For example, we might compare the performance of males and females, or that of young and middle-aged individuals.
Do such studies qualify as genuine experiments? The answer is “No”. Use of the experimental method requires that the independent variable is manipulated by the experimenter, but clearly the experimenter cannot decide whether a given person is going to be male or female for the purposes of the study! What is generally regarded as the greatest advantage of the experimental method is that it allows us to establish cause and effect relationships. In the terms we have been using, the independent variable in an experiment is often regarded as a cause, and the dependent variable is the effect. Philosophers of science have argued about whether or not causality can be established by experimentation. However, the general opinion is that causality can only be inferred. If y (e.g. poor performance) follows x (e.g. intense noise), then it is reasonable to infer that x caused y.
5. Theory: Definition of a theory: A set of logically consistent statements about some psychological phenomenon that (a) best summarizes existing empirical knowledge of the phenomenon, (b) organizes this knowledge in the form of precise statements of the relationships among variables, (c) provides a tentative explanation for the phenomenon, and (d) serves as a basis for making predictions about behavior. So we have our method and we have our assumptions; how do we actually do research? Relationship between theory and data.
6. Hypothesis: Prediction about specific events that is derived from the theory. Induction: Logical process of reasoning from specific events to the theory (either confirming or disproving the theory).
Definition: A way of knowing characterized by the attempt to apply systematic, objective, empirical methods when searching for causes of natural events. Probabilistic Statistical determinism: Based on what we have observed, is the likelihood of two events occurring together (whether causal, predictive, or simple relational) greater than chance? Objectivity: without bias of the experimenter or participants. Data-driven: conclusions are based on the data -- objective information.
7
8. Relationship between theory and data
9. Method: A technique used to analyze data. Commonly, a method is aligned with a particular strategy for gathering data, as particular methods commonly require particular types of data. “Method” is therefore commonly used to refer to strategies for both analyzing and gathering data.
10. Methodology: A body of practices, procedures, and rules used by researchers to offer insight into the workings of the world.
11. Insight: Evidence contributing to an understanding of a case or set of cases. Comparative-historical researchers are generally most concerned with causal insight, or insight into causal processes
12. Comparative Historical Analysis (Know different types): Comparative methods: Diverse methods used in the social sciences that offer insight through cross-case comparison. For this, they compare the characteristics of different cases and highlight similarities and differences between them. Comparative methods are usually used to explore causes that are common among a set of cases. They are commonly used in all social scientific disciplines. The label comparative-historical analysis is used in recognition of the tradition’s growing multidisciplinary character. In addition to sociology, comparative-historical analysis is quite prominent in political science and is present (albeit much more marginally) in history, economics, and anthropology.
4 types of comparative-historical research
Comparative and Historical Research by number of cases and length of time studied
Historical Methods
13. Epistemology: A branch of philosophy that considers the possibility of knowledge and understanding. Within the social sciences, epistemological debates commonly focus on the possibility of gaining insight into the causes of social phenomena
14. Variable: Something that the researcher/experimenter can measure.
15. Positivism: An epistemological approach that was popular among most of the founding figures of the social sciences. It claims that the scientific method is the best way to gain insight into our world. Within the social sciences, positivism suggests that scientific methods can be used to analyze social relations in order to gain knowledge. At its extreme, positivism suggests that the analysis of social relations through scientific methods allows researchers to discover laws that govern all social relations. Positivism is therefore linked to nomothetic explanations. Other positivists believe social complexity prevents the discovery of social laws, but they still believe that the scientific method allows researchers to gain insight into the determinants of social phenomena.
16. Ethnography (Ethnographic Methods) A type of social scientific method that gains insight into social relations through participant observation, interviews, and the analysis of art, texts, and oral histories. It is commonly used to analyze culture and is the most common method of anthropology.
17. Case Study (definition, when it is used, different types): Within-case methods: A category of methods used in the social sciences that offer insight into the determinants of a particular phenomenon for a particular case. For this, they analyze the processes and characteristics of the case.
18. Meta analysis: A study of multiple Case Studies
1. The logic for such a cross-case synthesis emulates that used in addressing whether the findings from a set of multiple experiments—too small in number to be made part of any quantitative meta-analysis (a study of the results of other studies)—support any broader pattern of conclusions.
2. The replication or corroboratory frameworks can vary. Discussed earlier was the desire to apply a replication logic in interpreting the findings across the cases in a multiple-case study. In a direct replication, the single cases would be predicted to arrive at similar results. In a theoretical replication, each single case's ultimate disposition also would have been predicted beforehand, but each case might have been predicted to produce a varying or even contrasting result, based on the preconceived propositions. Even more complex could be the stipulation and emergence of a typology of cases based on a multiple-case study.
19. Ideographic Explanation: Causal explanations that explore the causes of a particular case. Such explanations are not meant to apply to a larger set of cases and commonly focus on the particularities of the case under analysis.
23. Validity: internal and external. Objectivity can be achieved from a thorough review of the literature and the development of a theoretical framework. The literature review should be presented so that the reader can judge the objectivity of the research questions. Purpose of Research Design: Provides the plan or blueprint for testing research questions and hypotheses. Involves structure and strategy to maintain control and intervention fidelity.
26. Variable: Something that the researcher/experimenter can measure.
27. Independent Variable: The variable the experimenter has control over, can change in some way to see if it has an effect on other variables in the study.
28. Dependent Variable: The variable that is measured to see if a change takes place.
29. Control Variable: The variable that is not manipulated and that serves as a comparison group for the other variables in the study. This third variable is used to ensure that the independent variable, when manipulated, is actually having an effect on the dependent variable. For example, if a similar change occurs in the control variable as in the dependent variable, this indicates that the change may not be the result of the independent variable manipulation and may be a natural change in the variable. In an experiment the researcher manipulates the independent variable to see if it has an effect on the dependent variable.
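To make the experimental logic concrete, here is a minimal Python sketch. The scores, group sizes, and the "study-skills program" are all assumptions invented for illustration: the independent variable is whether a group received the program, the dependent variable is a test score, and the control group provides the comparison.

# Minimal sketch (hypothetical data): manipulating an independent variable
# (a made-up study-skills program) and comparing the dependent variable
# (test scores) between an experimental group and a control group.
import statistics
import math

experimental = [78, 85, 82, 90, 88, 76, 84, 91]  # received the program (IV manipulated)
control      = [74, 79, 70, 83, 77, 72, 80, 75]  # standard treatment / no treatment

def independent_t(group_a, group_b):
    """Independent-samples t statistic (equal-variance form)."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled = ((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)
    return (mean_a - mean_b) / math.sqrt(pooled * (1 / n_a + 1 / n_b))

print("Experimental mean:", statistics.mean(experimental))
print("Control mean:     ", statistics.mean(control))
print("t statistic:      ", round(independent_t(experimental, control), 2))

A large t statistic would suggest the difference between the groups is unlikely to be chance alone; with real data one would also check the p-value and the threats to validity discussed above.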
i. Historical Events Research –focuses on one short historical period (1 case, 1 time period)
ii. Historical Process Research –traces a sequence of events over a number of years (1 case, many time periods)
iii. Cross-sectional Comparative Research -- comparing data from one time period between two or more nations (many cases, 1 time period)
iv. Comparative Historical Research – longitudinal comparative research (many cases) over a prolonged period of time
30. Cross-sectional vs. longitudinal research: Cross-sectional research collects data at a single point in time, whereas longitudinal research follows the same cases over an extended period of time.
Control: Achieved with steps taken by the researcher to hold the conditions of the study uniform and avoid or decrease the effect of intervening, extraneous, or mediating variables on the dependent variable or outcome.
Intervention Fidelity
Intervention fidelity: Ensures that every subject receiving the intervention or treatment receives the identical intervention or treatment.
Intervening, Extraneous, Or Mediating Variables.
CRITERIA FOR SCIENTIFIC METHOD:
Empirical: All information is based on observation.
Objectivity: Observations are verified by others.
Systematic: Observations are made in a step-by-step fashion.
Controlled: Potentially confusing factors are eliminated.
Basic vs. Applied Research : Goal of describing, predicting, & explaining fundamental principles of behavior vs. solving real-life problems
2. Laboratory Research versus Field Research: Research in controlled laboratories vs. uncontrolled or real-life contexts.
Variable: Independent Variable (X). The independent variable is the variable that the researcher hypothesizes will have an effect on the dependent variable.
Usually manipulated (experimental study)
The independent variable is: manipulated by means of a program, treatment, or intervention done to only one group in the study (experimental group )
The control group gets the standard treatment or no treatment.
Dependent Variable (Y): The variable the researcher is interested in explaining; it is not manipulated but is presumed to vary with changes in the independent variable.
16) Can we generalize at all from a Case Study?
To the extent that any study concerns itself with generalizing, case studies tend to generalize to other situations (on the basis of analytic claims), whereas surveys and other quantitative methods tend to generalize to populations (on the basis of statistical claims).
One available procedure helps determine whether you can make any generalizations from your case study; it applies well to all kinds of case studies, including the holistic, single-case study that has been commonly criticized for having little or no generalizability.
EXPERIMENTAL METHOD
The method of investigation used most often by psychologists is the experimental method. In order to understand what is involved in the experimental method, we will consider a concrete example.
Dependent and independent variables
Suppose that a psychologist wants to test the experimental hypothesis that loud noise will have a disruptive effect on the performance of a task.
Known types of errors in experience-based conclusions and in psychological research in general:
Experience based errors in thinking Illusory Correlation
Definition: thinking that one has observed an association between events that
(a) doesn’t exist,
(b) exists but is not as strong as is believed,
or (c) is in the opposite direction from what is believed.
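One simple guard against an illusory correlation is to compare how often the two events actually occur together with how often they would co-occur by chance alone. Here is a minimal Python sketch; the counts in the 2x2 table are made up purely for illustration.

# Minimal sketch (made-up counts): checking an intuition about an association
# against the data, to guard against illusory correlation.
# Rows: event A present / absent; columns: event B present / absent.
table = {
    ("A", "B"): 12,        # both events observed together
    ("A", "not B"): 28,
    ("not A", "B"): 30,
    ("not A", "not B"): 70,
}

total = sum(table.values())
p_a = (table[("A", "B")] + table[("A", "not B")]) / total
p_b = (table[("A", "B")] + table[("not A", "B")]) / total

observed_together = table[("A", "B")] / total
expected_if_independent = p_a * p_b   # what chance alone would predict

print("P(A and B) observed :", round(observed_together, 3))
print("P(A) * P(B) expected:", round(expected_if_independent, 3))
# If the observed value is not clearly above the expected value,
# the "association" we think we saw may be illusory.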
Experience based errors in thinking: Confirmation Bias –
In psychology and cognitive science, confirmation bias (or confirmatory bias) is a tendency to search for or interpret information in a way that confirms one's preconceptions, leading to statistical errors.
The second error in thinking is Confirmation Bias. Comparative methods: Diverse methods used in the social sciences that offer insight through cross-case comparison. For this, they compare the characteristics of different cases and highlight similarities and differences between them. Comparative methods are usually used to explore causes that are common among a set of cases. They are commonly used in all social scientific disciplines.
Epistemology: A branch of philosophy that considers the possibility of knowledge and understanding. Within the social sciences, epistemological debates commonly focus on the possibility of gaining insight into the causes of social phenomena.
Experimental methods: The most powerful method used in the social sciences, albeit the most difficult to use. It manipulates individuals in a particular way (the treatment) and explores the impact of this treatment. It offers powerful insight by controlling the environment, thereby allowing researchers to isolate the impact of the treatment.
Case Study (Within-case methods): A category of methods used in the social sciences that offer insight into the determinants of a particular phenomenon for a particular case. For this, they analyze the processes and characteristics of the case.
Ideographic explanation: Causal explanations that explore the causes of a particular case. Such explanations are not meant to apply to a larger set of cases and commonly focus on the particularities of the case under analysis.
Insight: Evidence contributing to an understanding of a case or set of cases. Comparative-historical researchers are generally most concerned with causal insight, or insight into causal processes.
Statistical methods: The most common subtype of comparative methods. It operationalizes variables for several cases, compares the cases to explore relationships between the variables, and uses probability theory to estimate causal effects or risks. Within the social sciences, statistics uses natural variation to approximate experimental methods. There are diverse subtypes of statistical methods.
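As a concrete illustration of using natural variation across cases, here is a minimal Python sketch with invented numbers: two variables are operationalized for a handful of hypothetical cases and their correlation is computed. The variable names and values are assumptions for illustration only.

# Minimal sketch (invented numbers): operationalizing two variables for a
# few cases (e.g., countries) and computing a Pearson correlation, the
# simplest way statistical methods use cross-case variation.
import math

education_spending = [4.1, 5.6, 3.2, 6.0, 4.8, 5.1]   # % of GDP (assumed)
literacy_rate      = [88,  95,  82,  97,  91,  93]    # % (assumed)

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print("r =", round(pearson_r(education_spending, literacy_rate), 3))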
Variable: Something that the researcher/experimenter can measure.
Independent Variable: The variable the experimenter has control over, can change in some way to see if it has an effect on other variables in the study.
Dependent Variable: The variable that is measured to see if a change takes place.
Control Variable: The variable that is not manipulated and that serves as a comparison group for the other variables in the study. This third variable is used to ensure that the independent variable, when manipulated, is actually having an effect on the dependent variable. For example, if a similar change occurs in the control variable as in the dependent variable, this indicates that the change may not be the result of the independent variable manipulation and may be a natural change in the variable.
In an experiment the researcher manipulates the independent variable to see if it has an effect on the dependent variable.
Empirical: All information is based on observation.
Objectivity: Observations are verified by others.
Systematic: Observations are made in a step-by-step fashion.
Controlled: Potentially confusing factors are eliminated.
Public: Built on previous research, open to critique and replication, building towards theories
Hypothesis: Prediction about specific events that is derived from the theory.
Induction: Logical process of reasoning from specific events to the theory (either confirming or disproving the theory).
Relationship between theory and data
Deduction: Reasoning from a set of general statements towards the prediction of some specific event. Based on a theory, one can deduce an event or behavior given particular conditions.
Ø Scientific Method: Definition: A way of knowing characterized by the attempt to apply systematic, objective, empirical methods when searching for causes of natural events.
Probabilistic Statistical determinism: Based on what we have observed, is the likelihood of two events occurring together (whether causal, predictive, or simple relational) greater than chance?
Objectivity: without bias of the experimenter or participants.
Data-driven: conclusions are based on the data-- objective information.
Well, we have what is known as the scientific method:
Steps of The Scientific Method
Ø Ask a Question
Ø Do Background Research
Ø Construct a Hypothesis
Ø Test Your Hypothesis by Doing an Experiment
Ø Analyze Your Data and Draw a Conclusion
Ø Communicate Your Results
Scientific Thinking in Research
CRITERIA FOR SCIENTIFIC METHOD:
Empirical: All information is based on observation.
Objectivity: Observations are verified by others.
Systematic: Observations are made in a step-by-step fashion.
Controlled: Potentially confusing factors are eliminated.
Public: Built on previous research, open to critique and replication, building towards theories
Lawful: Every event can be understood as a sequence of natural causes and effects.
Determinism: Events and behaviors have causes.
Discoverability: Through systematic observation, we can discover causes – and work towards more certain and comprehensive explanations through repeated discoveries.
External Validity - The extent to which a study's results (regardless of whether the study is descriptive or experimental) can be generalized/applied to other people or settings reflects its external validity. Typically, group research employing randomization will initially possess higher external validity than will studies (e.g., case studies and single-subject experimental research) that do not use random selection/assignment. Internal Validity
External Validity, Reliability, and Field vs. Laboratory Research. Field Research: high external validity, low internal validity, low reliability.
Laboratory Research: high internal validity, low external validity, high reliability.
Social Science Research Methods
Comparative- Historical methods
Since the rise of the social sciences, researchers have used comparative- historical methods to expand insight into diverse social phenomena and, in so doing, have made great contributions to our understanding of the social world.
Comparative-Historical Analysis
Mahoney and Rueschemeyer (2003) refer to it as comparative-historical analysis in recognition of the tradition’s growing multidisciplinary character. In addition to sociology, comparative-historical analysis is quite prominent in political science and is present—albeit much more marginally—in history, economics, and anthropology.
4 types of comparative-historical research
• Historical Events Research –focuses on one short historical period (1 case, 1 time period)
• Historical Process Research –traces a sequence of events over a number of years (1 case, many time periods)
• Cross-sectional Comparative Research comparing data from one time period between two or more nations (many cases, 1 time period)
• Comparative Historical Research – longitudinal comparative research (many cases)
Comparative and Historical Research by number of cases and length of time studied
How do we understand Comparative Historical Research?
Comparative-historical analysis has four main defining elements. Two are methodological, as works within the research tradition employ both within-case methods and comparative methods. Comparative-historical analysis is also defined by epistemology. Specifically, comparative-historical works pursue social scientific insight and therefore accept the possibility of gaining insight through comparative-historical methods.
Secondary Sources – Collecting data from others who have already collected it, such as newspapers, magazines, and interviews. These sources of data are prone to the bias of the source; therefore the data may be somewhat inaccurate.
• Narrative. It relates a story involving specific actors and other events occurring at the same time (Abbott, 1994:102), or one that takes account of the position of actors and events in time and in a unique historical context (Griffin, 1992).
• Inductive. The research develops an explanation for what happened from the details discovered about the past.
Historical Events Research & Event-Structure Analysis
It often utilizes a process known as Historical Events Research.
Historical events research is research on past events that does not follow processes over a long period of time; it is basically cross-sectional, in contrast to historical process research.
Event Structure Analysis is a qualitative approach that relies on a systematic coding of key events or national characteristics to identify the underlying structure of action in a chronology of events.
Case Study Method: "An empirical inquiry about a contemporary phenomenon (e.g., a 'case'), set within its real-world context—especially when the boundaries between phenomenon and context are not clearly evident" (Yin, 2009a, p. 18; SagePub, 2014).
The case study method embraces the full set of procedures needed to do case study research. These tasks include designing a case study, collecting the study's data, analyzing the data, and presenting and reporting the results. All case study research starts from the same compelling feature: the desire to derive an up-close or otherwise in-depth understanding of a single or small number of "cases," set in their real-world contexts (e.g., Bromley, 1986, p. 1). The closeness aims to produce an invaluable and deep understanding—that is, an insightful appreciation of the "case(s)"—hopefully resulting in new learning about real-world behavior and its meaning. The distinctiveness of the case study, therefore, also serves as its abbreviated definition.
3) Assumptions: “Among other features, case study research assumes that examining the context and other complex conditions related to the case(s) being studied are integral to understanding the case(s) (SagePub, 2014).”
Thus, among other features, case study research assumes that examining the context and other complex conditions related to the case(s) being studied are integral to understanding the case(s). The in-depth focus on the case(s), as well as the desire to cover a broader range of contextual and other complex conditions, produce a wide range of topics to be covered by any given case study. In this sense, case study research goes beyond the study of isolated variables. As a by-product, and as a final feature in appreciating case study research, the relevant case study data are likely to come from multiple and not singular sources of evidence.
4) When to use the Case Study Method: “First and most important, the choices among different research methods, including the case study method, can be determined by the kind of research question that a study is trying to address (e.g., Shavelson & Towne, 2002, pp. 99–106, SagePub, 2014).”
“Second, by emphasizing the study of a phenomenon within its real-world context, the case study method favors the collection of data in natural settings, compared with relying on “derived” data (Bromley, 1986, p. 23, SagePub, 2014)”
Third, the case study method is now commonly used in conducting evaluations (SagePub, 2014).
11) Presenting your Case: Properly dealing with case study evidence requires a final but essential practice: You need to present the evidence in your case study with sufficient clarity (e.g., in separate texts, tables, and exhibits) to allow readers to judge independently your later interpretation of the data. Ideally, such evidence will come from a formal case study database that you compile for your files after completing your data collection.
Unfortunately, older case studies frequently mixed evidence and interpretation.
This practice may still be excusable when doing a unique case study or a revelatory
case study, because the insights may be more important than knowing the strength of the evidence for such insights. However, for most case studies, mixing evidence and interpretation may be taken as a sign that you do not understand the difference between the two or that you do not know how to handle data (and hence proceeded prematurely to interpretation)
13) Techniques
Pattern-Matching
Open-Ended Questions
Time-Series-Like Analysis
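Of these techniques, pattern-matching is the easiest to illustrate. Here is a minimal Python sketch: a theoretically predicted pattern of outcomes is compared, indicator by indicator, with the pattern actually observed in a case. The indicators and their values are invented for illustration.

# Minimal sketch (hypothetical case data): pattern-matching compares an
# empirically observed pattern with a pattern predicted by theory.
predicted = {"staff_turnover": "down", "morale": "up", "output": "up"}
observed  = {"staff_turnover": "down", "morale": "up", "output": "flat"}

matches = {key: predicted[key] == observed.get(key) for key in predicted}
print("Indicator matches:", matches)
print("Pattern supported:", all(matches.values()))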
If selecting your case(s) to be studied is the most critical step in doing case study research, analyzing your case study data is probably the most troublesome.
Much of the problem relates to false expectations: that the data will somehow "speak for themselves," or that some counting or tallying procedure will be sufficient to produce the main findings for a case study. Wrong. Instead, consider the following alternatives. You actually made some key assumptions for your analysis when you defined your research questions and your case. Was your motive in doing the case study mainly to address your research questions? If so, then the techniques for analyzing the data might be directed at those questions first. Was your motive to derive more general lessons for which your case(s) are but examples? If so, your analysis might be directed at these more general lessons.
21. Case Study (definition, when it is used, different types):
Case Study Data Analysis Whether using computer software to help you or not, the researcher will be the one who must define the codes to be used and the procedures for logically piecing together the coded evidence into broader themes—in essence creating your own unique algorithm befitting your particular case study. The strength of the analytic course will depend on a marshaling of claims that use your data in a logical fashion.
Your analysis can begin by systematically organizing your data (narratives and words) into hierarchical relationships, matrices, or other arrays (e.g., Miles & Huberman, 1994).
Case study analysis takes many forms, but none yet follow the routine procedures that may exist with other research methods. The absence of any cookbook for analyzing case study evidence has been only partially offset by the development of  prepackaged computer software programs. They can support the analysis of large amounts of narrative text by following your instructions in coding and categorizing your notes or your verbatim transcripts. However, unlike software for analyzing numeric data, whereby an analyst provides the input data and the computer uses an algorithm to estimate some model and proceeds to produce the output data, there is no automated algorithm when analyzing narrative data
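Because there is no automated algorithm, the researcher has to define the codes and the rules for applying them. Here is a minimal Python sketch of that idea; the field notes, the codebook, and the keyword rule are all assumptions invented for illustration, not a real coding scheme.

# Minimal sketch (invented notes and codes): the researcher, not the software,
# defines the codes; a trivial keyword rule assigns codes to note segments.
notes = [
    "Teachers reported feeling unsupported by the district office.",
    "The principal introduced weekly planning meetings in March.",
    "Parents said communication from the school had improved.",
]
codebook = {                      # researcher-defined codes (assumed)
    "support": ["unsupported", "support"],
    "communication": ["communication", "meetings"],
}

coded = {code: [] for code in codebook}
for segment in notes:
    for code, keywords in codebook.items():
        if any(word in segment.lower() for word in keywords):
            coded[code].append(segment)

for code, segments in coded.items():
    print(code, "->", len(segments), "segment(s)")

In real case study analysis the codes, and the logic for piecing coded evidence into themes, would of course be far richer than a keyword list; the point is only that the researcher supplies the algorithm.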
24. Basic vs. Applied Research :
Basic versus Applied Research
Goal of describing, predicting, & explaining fundamental principles
of behavior vs. solving real-life problems
2. Laboratory Research versus Field Research
Research in controlled laboratories vs. uncontrolled or real-life contexts
3. Quantitative versus Qualitative Research
Descriptive & inferential statistics vs. narrative analysis (e.g., case studies, observational research, interviews)
27. Quantitative vs. Qualitative research: Quantitative Research: An Overview
Mathematically based. Often uses survey-based measures to collect data. Often collects data on what is known as a "Likert scale," a 4-7 point numerical scale on which a participant rates agreement. Uses statistical methodology to analyze numerical data.
As quantitative research is essentially about collecting numerical data to explain a particular phenomenon, particular questions seem immediately suited to being answered using quantitative methods. The number of phenomena we can study in this way is almost unlimited, making quantitative research quite flexible. This is not to say that all phenomena are best studied by quantitative methods. As we will see, while quantitative methods have some notable advantages, they also have disadvantages, which means that some phenomena are better studied by using different (qualitative) methods.
The last part of the definition refers to the use of mathematically based methods, in particular statistics, to analyze the data. This is what people usually think about when they think of quantitative research, and it is often seen as the most important part of quantitative studies. This is a bit of a misconception, as, while using the right data analysis tools obviously matters a great deal, using the right research design and data collection instruments is actually more crucial. The use of statistics to analyze the data is, however, the element that puts a lot of people off doing quantitative research, as the mathematics underlying the methods seems complicated and frightening. As we will see later on in this book, most researchers do not really have to be particularly expert in the mathematics underlying the methods, as computer software allows us to do the analyses quickly and (relatively) easily.
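Indeed, even a few lines of code are enough for the basic descriptive statistics. Here is a minimal Python sketch for a single Likert-scale item; the responses are fabricated for illustration only.

# Minimal sketch (fabricated responses): descriptive statistics for a single
# Likert-scale item (1 = strongly disagree ... 5 = strongly agree).
import statistics
from collections import Counter

responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5, 1, 4]   # assumed survey data

print("n        :", len(responses))
print("mean     :", round(statistics.mean(responses), 2))
print("median   :", statistics.median(responses))
print("std dev  :", round(statistics.stdev(responses), 2))
print("frequency:", dict(sorted(Counter(responses).items())))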
6) When to use Quantitative Research
1) When we are looking for a numerical answer
2) When we want to study numerical change
3) When we want to find out about the state of something or to explain a phenomena
4) When we want to test a hypothesis
If we take a pragmatic approach to research methods, the main question that we need to answer is 'what kind of questions are best answered by using quantitative as opposed to qualitative methods?' There are four main types of research questions that quantitative research is particularly suited to finding an answer to:
1. The first type of research question is that demanding a quantitative answer. Examples are: 'How many students choose to study education?' or 'How many math teachers do we need and how many have we got in our school district?' That we need to use quantitative research to answer this kind of question is obvious. Qualitative, non-numerical methods will obviously not provide us with the (numerical) answer we want.
2. Numerical change can likewise accurately be studied only by using quantitative methods. Are the numbers of students in our university rising or falling? Is achievement going up or down? We'll need to do a quantitative study to find out.
3. As well as wanting to find out about the state of something or other, we often want to explain phenomena. What factors predict the recruitment of math teachers? What factors are related to changes in student achievement over time? As we will see later on in this book, this kind of question can also be studied successfully by quantitative methods, and many statistical techniques have been developed that allow us to predict scores on one factor, or variable (e.g. teacher recruitment) from scores on one or more other factors, or variables (e.g. unemployment rates, pay, conditions).
4. The final activity for which quantitative research is especially suited is the testing of hypotheses. We might want to explain something – for example, whether there is a relationship between pupils' achievement and their self-esteem and social background. We could look at the theory and come up with the hypothesis that lower social class background leads to low self-esteem, which would in turn be related to low achievement. Using quantitative research, we can try to test this kind of model (see the sketch after this list).
Problems one and two above are called 'descriptive'. We are merely trying to describe a situation. Three and four are 'inferential'. We are trying to explain something rather than just describe it.
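Here is a minimal Python sketch of the inferential idea in item 4, simplified to a single predictor (self-esteem predicting achievement). The scores are made up for illustration, and a real analysis would use multiple predictors and significance tests.

# Minimal sketch (made-up scores): a simplified, one-predictor version of the
# inferential question above -- does self-esteem predict achievement?
self_esteem = [22, 30, 25, 35, 28, 40, 33, 27]   # assumed scale scores
achievement = [61, 70, 66, 78, 69, 85, 74, 68]   # assumed test scores

n = len(self_esteem)
mean_x = sum(self_esteem) / n
mean_y = sum(achievement) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(self_esteem, achievement))
         / sum((x - mean_x) ** 2 for x in self_esteem))
intercept = mean_y - slope * mean_x

print(f"predicted achievement = {intercept:.1f} + {slope:.2f} * self_esteem")
print("prediction at self-esteem 32:", round(intercept + slope * 32, 1))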
7) Advantages and Disadvantages: Quantitative
Quantitative Advantages:
Concise
Accurate
Strictly Controlled
Replicable
Can indicate causation
Ideally is objective
Quantitative Disadvantages:
Limited understanding of individuality
Groups people into categories
Can be accused of oversimplifying human nature
When we think about the advantages of quantitative research, the first thing we will acknowledge is that it is the dominant approach in psychological research. It is concise, accurate and can be strictly controlled to ensure that the results are replicable and that causation is established. Quantitative data also has predictive power in that research can be generalized to a different setting. It can also be a lot faster and easier to analyze than qualitative data.
While quantitative research has these advantages, there are other types of questions that are not well suited to quantitative methods.
1. The first situation where quantitative research will fail is when we want to explore a problem in depth. Quantitative research is good at providing information in breadth, from a large number of units, but when we want to explore a problem or concept in depth, quantitative methods can be too shallow. To really get under the skin of a phenomenon, we will need to go for ethnographic methods, interviews, in-depth case studies and other qualitative techniques.
2. We saw above that quantitative research is well suited for the testing of theories and hypotheses. What quantitative methods cannot do very well is develop hypotheses and theories. The hypotheses to be tested may come from a review of the literature or theory, but can also be developed by using exploratory qualitative research.
3. If the issues to be studied are particularly complex, an in-depth qualitative study (a case study, for example) is more likely to pick up on this than a quantitative study. This is partly because there is a limit to how many variables can be looked at in any one quantitative study, and partly because in quantitative research the researcher defines the variables to be studied herself, while in qualitative research unexpected variables may emerge.
4. Finally, while quantitative methods are best for looking at cause and effect (causality, as it is known), qualitative methods are more suited to looking at the meaning of particular events or circumstances.
Focus on “language rather than numbers”
“Embraces “intersubjectivity” or how people may construct meaning…”
Focus on the individual and their real lived experience
Qualitative methods have much to offer when we need to explore people's feelings or ask participants to reflect on their experiences. As was noted above, some of the earliest psychological thinkers of the late 19th century and early 20th century may be regarded as proto-qualitative researchers. Examples include the 'founding father' of psycho-analysis, Sigmund Freud, who worked in Vienna (late 19th century to mid 20th century), recorded and published numerous case-studies and then engaged in analysis, postulation and theorizing on the basis of his observations, and the pioneering Swiss developmental psychologist, Jean Piaget (1896 – 1980), who meticulously observed and recorded his children's developing awareness and engagement with their social world. They were succeeded by many other authors from the 1940s onwards who adopted qualitative methods and may be regarded as contributors to the development of qualitative methodologies.
9) Advantages and Disadvantages: Qualitative
Qualitative Advantages:
Appreciates research participant’s individuality
Provides insider view of research question
Less structured than quantitative approach
Qualitative Disadvantages:
Not always appropriate to generalize results to larger population
10) Qualitative Research in Psychology
Today, a growing number of psychologists are re-examining and re-exploring qualitative methods for psychological research, challenging the more traditional 'scientific' experimental approach (see, for example, Gergen, 1991; 1985; Smith et al., 1995a, 1995b). There is a move towards a consideration of what these other methods can offer to psychology (Bruner, 1986; Smith et al., 1995a).
Content and Thematic Analysis
Grounded Theory (Generating Theory from Data)
Discourse and Narrative Analysis
12) When to use Qualitative Research Content and Thematic Analysis - Content Analysis, or Thematic Analysis (the terms are frequently used interchangeably and generally mean much the same), is particularly useful for conceptual, or thematic, analysis or relational analysis. It can quantify the occurrences of concepts selected for examination (Wilkinson & Birmingham, 2003).
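To show the counting side of content analysis, here is a minimal Python sketch. The transcript sentence and the list of concepts are invented for illustration, and the matching rule (a crude prefix match) is only a stand-in for a real coding scheme.

# Minimal sketch (invented transcript): quantifying how often selected concepts
# occur in a body of text, the counting side of content analysis.
import re
from collections import Counter

transcript = (
    "I felt stressed before the exam, but talking to friends reduced my stress. "
    "The exam itself was fine, and afterwards I felt supported by my friends."
)
concepts = ["stress", "exam", "friend", "support"]   # assumed coding categories

words = re.findall(r"[a-z]+", transcript.lower())
counts = Counter()
for word in words:
    for concept in concepts:
        if word.startswith(concept):      # crude stemming: "stressed" counts as "stress"
            counts[concept] += 1

print(counts)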
Randomization Each subject in the study has an equal chance of being assigned to the control group or the experimental group.
Assumes that any important intervening, extraneous, or mediating variable will be equally distributed between the groups, minimizing variance and decreasing selection bias.
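Random assignment itself is simple to sketch in Python. The subject IDs below are made up; the point is only that shuffling before splitting gives every subject an equal chance of landing in either group.

# Minimal sketch: randomly assigning subjects so each has an equal chance of
# ending up in the control or the experimental group. Subject IDs are made up.
import random

subjects = [f"S{i:02d}" for i in range(1, 21)]   # 20 hypothetical subjects
random.shuffle(subjects)                          # put them in random order

half = len(subjects) // 2
experimental_group = subjects[:half]
control_group = subjects[half:]

print("Experimental:", experimental_group)
print("Control:     ", control_group)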
Internal Validity
Asks whether the independent variable really made the difference or the change in the dependent variable.
Established by ruling out other factors or threats as rival explanations.
Threats to internal validity
History: an event, other than the intervention, that might have an effect on the dependent variable.
Experience: is the study based on the researcher’s experience and interest?
Ethics: could subjects be harmed?
CONTROL
Control: Achieved with steps taken by the researcher to hold the conditions of the study uniform and avoid or decrease the effect of intervening, extraneous, or mediating variables on the dependent variable or outcome.
Intervention Fidelity
Intervention fidelity: Ensures that every subject receiving the intervention or treatment receives the identical intervention or treatment.
Intervening, Extraneous, Or Mediating Variables.
Variables that occur during the study that interfere with or influence the relationship between the independent and dependent variables.
Intervening and mediating variables are processes that occur during the study.
Extraneous variables are subject, researcher, or environmental characteristics that might influence the dependent variable. “muck up the study!!!!”
Controlling Extraneous Variables
Using a homogeneous sample
Using consistent data-collection procedures- constancy.
8. Method: A technique used to analyze data. Commonly, a method is aligned with a particular strategy for gathering data, as particular methods commonly require particular types of data. "Method" is therefore commonly used to refer to strategies for both analyzing and gathering data.
Methodology: A body of practices, procedures, and rules used by researchers to offer insight into the workings of the world.
Mahoney and Rueschemeyer (2003) refer to it as comparative-historical analysis in recognition of the tradition's growing multidisciplinary character. In addition to sociology, comparative-historical analysis is quite prominent in political science and is present—albeit much more marginally—in history, economics, and anthropology.
4 types of comparative-historical research;
Historical Events Research –focuses on one short historical period (1 case, 1 time period)
Historical Process Research – traces a sequence of events over a number of years (1 case, many time periods)
Cross-sectional Comparative Research -- comparing data from one time period between two or more nations (many cases, 1 time period)
Comparative Historical Research – longitudinal comparative research (many cases) over a prolonged period of time
Comparative and Historical Research by number of cases and length of time studied
How do we understand Comparative Historical Research?
Historical Methods: Historical methods, also known as historiography, are the most common analytic  techniques used in the discipline of history. They are generally used to explore either what happened at a particular time and place or what the characteristics of a phenomenon were like at a particular time and place.
Similar to statistical and experimental methods, comparative-historical methods employ comparison as a means of gaining insight into causal determinants. Similar to ethnographic and historical methods, comparative-historical methods explore the characteristics and causes of particular phenomena.
Comparative-historical analysis, however, does not simply combine the methods from other major methodological traditions—none of the major comparative methods is very common in comparative-historical analysis.
As a consequence, comparative-historical researchers commonly avoid statistics and simply focus on causal processes. Additional reasons for the limited use of statistical comparison within the comparative-historical research tradition include the limited availability of historical data needed for appropriate statistical analyses and the small number of cases analyzed by comparative-historical researchers.
Comparative Historical “toolkit” 
Besides comparative methods, comparative-historical scholars employ several different types of within-case methods: Ethnography, Historical Methods, Idiographic Methods, and Nomothetic Explanations. So what does this tool-kit look like?
Well, comparative historical research can be: 
Holistic. It is concerned with the context in which events occurred and the interrelations among different events and processes: “how different conditions or parts fit together” (Ragin, 1987:25–26).
Conjunctural. This is because, it is argued, "no cause ever acts except in complex conjunctions with others" (Abbot, 1994:101).
Temporal. It becomes temporal by taking into account the related series of events that unfold over time.
Historically specific. It is likely to be limited to the specific time(s) and place(s) studied, like traditional historical research.
14. Ethnography (Ethnographic Methods): A type of social scientific method that gains insight into social relations through participant observation, interviews, and the analysis of art, texts, and oral histories. It is commonly used to analyze culture and is the most common method of anthropology.
18. Basic vs. Applied Research : Basic versus Applied Research Goal of describing, predicting, & explaining fundamental principles of behavior vs. solving real-life problems
21. Quantitative vs. Qualitative research: Quantitative Research: An Overview
Mathematically based. Often uses survey-based measures to collect data. Often collects data on what is known as a "Likert scale," a 4-7 point numerical scale on which a participant rates agreement. Uses statistical methodology to analyze numerical data.
As quantitative research is essentially about collecting numerical data to explain a particular phenomenon, particular questions seem immediately suited to being answered using quantitative methods. How many males get a first-class degree at university compared to females? What percentage of teachers and school leaders belong to ethnic minority groups? Has pupil achievement in English improved in our school district over time? These are all questions we can look at quantitatively, as the data we need to collect are already available to us in numerical form. Does this not severely limit the usefulness of quantitative research though? There are many phenomena we might want to look at, but which don't seem to produce any quantitative data. In fact, relatively few phenomena in education actually occur in the form of 'naturally' quantitative data. Luckily, we are far less limited than might appear from the above. Many data that do not naturally appear in quantitative form can be collected in a quantitative way. We do this by designing research instruments aimed specifically at converting phenomena that don't naturally exist in quantitative form into quantitative data, for example by collecting data from participants, using experimental methods, or very structured psychometric questionnaires.
9) Advantages and Disadvantages: Qualitative
Qualitative Advantages: Appreciates research participant’s individuality
Provides insider view of research question Less structured than quantitative approach
Qualitative Disadvantages: Not always appropriate to generalize results to larger population
Time consuming. Difficult to test a hypothesis. Proponents of qualitative research argue that such methodology sees people as individuals, attempting to gather their subjective experience of an event. This can provide a unique insider view of the research question. Through the qualitative approach, which is less structured than a quantitative approach, unexpected results and insights can occur.
In summary, to the extent that any study concerns itself with generalizing, case studies tend to generalize to other situations (on the basis of analytic claims), whereas surveys and other quantitative methods tend to generalize to populations (on the basis of statistical claims).
According to the lectures and the book, I learned that in the social sciences there is no best kind of research. I think researchers probably use several methods in order to conduct research. Empirical: all information is based on observation. Objectivity: observations are verified by others. Systematic: observations are made in a step-by-step fashion. Controlled: potentially confusing factors are eliminated. Public: built on previous research, open to critique and replication, building towards theories.
1. When we consider the advantages and disadvantages of laboratory vs. field research, are there any others that come to mind that were not outlined in lecture?
A) Field Research/Ethnography: Participant observation is based on living among the people under study for a period of time, which could be months or even years, and gathering data through continuous involvement in their lives and activities. The ethnographer begins systematic observation and keeps notes, in which the significant events of each day are recorded along with informants' interpretations. These demands are met through two major research techniques: participant observation and key informant interviewing. An example would be the one in the video, where Maria has been spending several months with Steve, a drug user, and the ethical problem arises because the participant does not realize that his behavior is being observed. Obviously, since there is no consent, he cannot give voluntary informed consent to be involved in the study. Steve confesses that he is HIV positive and his partner does not know; there is a confidentiality issue.
2. Are there some things we can do in the field that we just cannot do in the lab, and vice versa?
A) I learned that a clear advantage of laboratory experiments over field experiments is that it is much easier to obtain large amounts of very detailed information from participants in the laboratory. An important reason why laboratory experiments are more artificial than field experiments is that the participants in laboratory experiments are aware that their behavior is being observed. One of the advantages of field experiments over laboratory experiments is that the behavior of the participants is often closer to their normal behavior. The greatest advantage of field experiments over laboratory experiments is that they are less artificial.
A) I learned that the method of investigation used most often by psychologists is the experimental method. Some of the advantages of the experimental method are common to both laboratory and field experiments. I would have to know reliability and validity and field vs. laboratory research to avoid any confounding variables. These are variables that are manipulated or allowed to vary systematically along with the independent variable. The presence of any confounding variables can destroy the experiment, because it prevents us from being able to interpret our findings.
Four main elements:
Cross-sectional Comparative Research -- comparing data from one time period between two or more nations (many cases, 1 time period)
Consider generalizing the results of research done on a small sample to the general population.
I think I need to consider the type of design chosen: it questions the conditions under which the findings can be generalized, and deals with the ability to generalize the findings outside the study to other populations and environments.
Purpose of Research Design: Provides the plan or blueprint for testing research questions and hypotheses. Involves structure and strategy to maintain control and intervention fidelity. Accuracy: Accomplished through the theoretical framework and literature review; all aspects of the study systematically and logically follow from the research questions. Time: Is there enough time for completion of the study? Control: Achieved with steps taken by the researcher to hold the conditions of the study uniform and avoid or decrease the effect of intervening, extraneous, or mediating variables on the dependent variable or outcome. Intervention fidelity: Ensures that every subject receiving the intervention or treatment receives the identical intervention or treatment.
What are some of the benefits and negatives of qualitative and quantitative research? Intervening, extraneous, or mediating variables are variables that occur during the study that interfere with or influence the relationship between the independent and dependent variables.
Intervening and mediating variables are processes that occur during the study.
Objectivity can be achieved from a thorough review of the literature and the development of a theoretical framework.
Instrumentation: changes in equipment used to make measurements or changes in observational techniques may cause measurements to vary between participants related to treatment fidelity.
Controlling Extraneous Variables: Using a homogeneous sample; using consistent data-collection procedures (constancy). A homogeneous sample is one in which the researcher chooses participants who are alike – for example, participants who belong to the same subculture or have similar characteristics. Homogeneous sampling can be of particular use for conducting focus groups because individuals are generally more comfortable sharing their thoughts and ideas with other individuals who they perceive to be similar to them. Patton, M. (2001). Qualitative Research & Evaluation Methods.
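As a small illustration of forming a homogeneous sample, here is a minimal Python sketch that keeps only participants who share the characteristics of interest. The participant records and the chosen characteristics are fabricated assumptions.

# Minimal sketch (fabricated participant records): forming a homogeneous sample
# by keeping only participants who share the characteristics of interest.
participants = [
    {"id": 1, "age_group": "18-25", "student": True},
    {"id": 2, "age_group": "26-40", "student": False},
    {"id": 3, "age_group": "18-25", "student": True},
    {"id": 4, "age_group": "18-25", "student": False},
]

homogeneous_sample = [
    p for p in participants
    if p["age_group"] == "18-25" and p["student"]   # shared characteristics (assumed)
]
print([p["id"] for p in homogeneous_sample])   # -> [1, 3]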
When thinking of this, could one be said to be superior to the other, or are they context specific?
The independent variable is: manipulated by means of a program, treatment, or intervention done to only one group in the study (experimental group ) The control group gets the standard treatment or no treatment.
The dependent variable is a factor, trait, or condition that can exist in differing amounts or types. It is not manipulated but is presumed to vary with changes in the independent variable; it is the variable the researcher is interested in explaining.
Randomization Each subject in the study has an equal chance of being assigned to the control group or the experimental group.
Assumes that any important intervening, extraneous, or mediating variable will be equally distributed between the groups, minimizing variance and decreasing selection bias.
Testing: Taking the same test more than once can influence the participant's responses the next time the test is taken.
Mixed-Methods Designs: What, then, do we do if we want to look at both breadth and depth, or at both causality and meaning? In those cases, it is best to use a so-called mixed-methods design, in which we use both quantitative (for example, a questionnaire) and qualitative (for example, a number of case studies) methods. Mixed-methods research is a flexible approach, where the research design is determined by what we want to find out rather than by any predetermined epistemological position. In mixed-methods research, qualitative or quantitative components can predominate, or both can have equal status.
Jean Piaget stage of Concrete Operations:
Ages Seven through Eleven
Jean Piaget devoted his life to how thoughts were transformed into a body of knowledge. His theories of cognitive development were inspired by observations of his three children from infancy. Piaget believed that children were active participants in learning. He viewed children as busy, motivated explorers whose thinking developed as they acted directly on the environment using their eyes, ears, and hands. According to Piaget, between the ages of seven and eleven children are in the stage of concrete operations.
· The stage of concrete operations begins when the child is able to perform mental operations. Piaget defines a mental operation as an interiorized action, an action performed in the mind. Mental operations permit the child to think about physical actions that he or she previously performed. The preoperational child could count from one to ten, but the actual understanding that one stands for one object only appears in the stage of concrete operations.
The primary characteristic of concrete operational thought is its reversibility. The child can mentally reverse the direction of his or her thought. A child knows that something that he can add, he can also subtract. He or she can trace her route to school and then follow it back home, or picture where she has left a toy without a haphazard exploration of the entire house. A child at this stage is able to do simple mathematical operations. Operations are labeled “concrete” because they apply only to those objects that are physically present.
Conservation is the major acquisition of the concrete operational stage. Piaget defines conservation as the ability to see that objects or quantities remain the same despite a change in their physical appearance. Children learn to conserve such quantities as number, substance (mass), area, weight, and volume; though they may not achieve all concepts at the same time.
STAGE THREE: The Concrete Operational Stage
QUICK SUMMARY: Children have schemata (cognitive structures that contain pre-existing ideas of the world), which are constantly changing. Schemata constantly undergo adaptation, through the processes of assimilation and accommodation. When seeing new objects there is a state of tension, and a child will attempt to assimilate the information to see if it fits into prior schemata. If this fails, the information must be accommodated by either adding new schemata or modifying the existing ones to accommodate the information. By balancing the use of assimilation and accommodation, an equilibrium is created, reducing cognitive tension (equilibration).
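The assimilate-or-accommodate loop in the summary above can be written out as a toy Python sketch. This is only an illustration of the logic, not Piaget's own formalism; the schemata and the "animal" examples are invented.

# Toy illustration only (not Piaget's formalism): try to assimilate a new
# object into existing schemata; accommodate by adding a schema if that fails.
schemata = {"dog", "cat"}          # existing schemata (assumed)

def encounter(animal, known):
    """Assimilate if the object fits a prior schema; otherwise accommodate."""
    if animal in known:
        return f"assimilated '{animal}' into an existing schema"
    known.add(animal)              # accommodation: extend/modify the schemata
    return f"accommodated by creating a new schema for '{animal}'"

print(encounter("dog", schemata))    # fits prior schema -> assimilation
print(encounter("zebra", schemata))  # does not fit -> accommodation
print("schemata now:", schemata)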
Focus on “language rather than numbers”
“Embraces “intersubjectivity” or how people may construct meaning…”
Focus on the individual and their real lived experience
Qualitative methods have much to offer when we need to explore people's feelings or ask participants to reflect on their experiences. As was noted above, some of the earliest psychological thinkers of the late 19th century and early 20th century may be regarded as proto-qualitative researchers. Examples include the 'founding father' of psycho-analysis, Sigmund Freud, who worked in Vienna (late 19th century to mid 20th century), recorded and published numerous case-studies and then engaged in analysis, postulation and theorizing on the basis of his observations, and the pioneering Swiss developmental psychologist, Jean Piaget (1896 – 1980), who meticulously observed and recorded his children's developing awareness and engagement with their social world. They were succeeded by many other authors from the 1940s onwards who adopted qualitative methods and may be regarded as contributors to the development of qualitative methodologies through their emphasis of the importance of the idiographic and use of case studies (Allport, 1946; Nicholson, 1997). This locates the roots of qualitative thinking in the long-standing debate between empiricist and rationalistic schools of thought, and also in social constructionism (Gergen, 1985; King & Horrock, pp. 6-24). So, what exactly is qualitative research? A practical definition points to methods that use language, rather than numbers, and an interpretative, naturalistic approach. Qualitative research embraces the concept of intersubjectivity, usually understood to refer to how people may agree or construct meaning: perhaps a shared understanding, emotion, feeling, or perception of a situation, in order to interpret the social world they inhabit (Nerlich, 2004, p. 18). Norman Denzin and Yvonna Lincoln define qualitative researchers as people who usually work in the 'real' world of lived experience, often in a natural setting, rather than a laboratory-based experimental approach. The qualitative researcher tries to make sense of social phenomena and the meanings people bring to them (Denzin & Lincoln, 2000). In qualitative research, it is acknowledged that the researcher is an integral part of the process and may reflect on her/his own influence and experience in the research process. The qualitative researcher accepts that s/he is not 'neutral'. Instead s/he puts herself in the position of the participant or 'subject' and attempts to understand how the world is from that person's perspective. As this process is re-iterated, hypotheses begin to emerge, which are 'tested' against the data of further experiences, e.g. people's narratives. One of the key differences between quantitative and qualitative approaches is apparent here: the quantitative approach states the hypothesis from the outset (i.e. a 'top-down' approach), whereas in qualitative research the hypothesis, or research question, is refined and developed during the process. This may be thought of as a 'bottom-up' or emergent approach.
Ø 11. Comparative Historical Analysis (Know different types):
It depends on what kind of research you are doing, or for what purpose you are researching. For example, social science is the science of people or collections of people, such as groups, firms, societies, or economies, and their individual or collective behaviors. It can be classified into disciplines such as psychology, sociology, and economics. Society is very much about the "collective." I think using the scientific method is imperative for any kind of research.
3. What are your ideas as researchers-in-training for accounting for the disadvantages of each and what problems might you foresee arising with your idea?
Historical Events Research focuses on one short historical period (1 case, 1 time period)
Historical Process Research –traces a sequence of events over a number of years (1 case, many time  periods)
Comparative Historical Research – longitudinal comparative research (many cases) over a prolonged period of time. Comparative and Historical Research by number of cases and length of time studied.
What are some of the benefits and negatives of the Case Study method. When compared to other types of research reviewed in the course thus far? Can you think of some specific examples where the case study method might be preferable?
The Case Study Method: A case study is a "strategy for doing research which involves an empirical investigation of a particular contemporary phenomenon within its real life context using multiple sources of evidence" (Robson, 1993, p. 146). Case studies are in-depth investigations of a single person, group, event or community. Whether the research is field or laboratory research, we always lose valuable information about individual variation when we try to collapse experiences, emotions, and behaviors into common categories that we can measure numerically and generalize across a population. If I understand the lecture correctly, we cannot simply generalize from a case study; three types of case study must be considered: descriptive, exploratory, and explanatory.
The independent variable is the variable that the researcher hypothesizes will have an effect on the dependent variable. It is usually manipulated (experimental study).









References
Desrochers, S. (2008). From Piaget to Specific Genevan Developmental Models. Child Development Perspectives, 2(1), 7-12. doi:10.1111/j.1750-8606.2008.00034.x

Lourenço, O., & Machado, A. (1996). In defense of Piaget's theory: A reply to 10 common criticisms. Psychological Review, 103(1), 143-164. doi:10.1037//0033-295X.103.1.143

The study says:

Separation from the mother in infancy causes alterations in the baby's gut microbiota (microorganisms) that can lead to the development of behavioral disorders persisting into adulthood, according to a study in rodents published in the journal Nature Communications.
Traumatic episodes during childhood are associated with a greater risk of developing psychiatric, metabolic, and intestinal diseases in adulthood, although the mechanisms by which this phenomenon arises across such diverse pathologies are unknown, according to Spain's Consejo Superior de Investigaciones Científicas (CSIC).
Yolanda Sanz, of the CSIC's Instituto de Agroquímica y Tecnología de Alimentos, explained that the prolonged stress caused by separation from the mother in newborn rodents produces a dysfunction in the hypothalamic-pituitary-adrenal axis, one of the body's main neuroendocrine control systems.
"This, in turn, causes alterations in various physiological functions, affecting, among others, the central nervous system and the emotions," she said.
According to the scientist, this work has shown that separation from the mother in infancy causes alterations in the composition and functions of the gut microbiota related to the synthesis of neurotransmitters.
These alterations, in turn, are responsible for the development of behavioral disorders such as anxiety, which could increase the risk of developing psychiatric illnesses such as depression in adulthood.
The study used germ-free mice and conventional mice in order to establish a causal relationship between stress, behavioral disorders, and the gut microbiota.
It was thus found that while some of the neuroendocrine alterations produced by chronic stress are independent of the presence of microbiota, the microbiota is essential for the development of behavioral alterations, acting as a causal factor in anxiety.
The results of the work, led by McMaster University in Canada, could be applied in the future to improve mental health and reduce the risk of developing psychiatric pathologies by modulating the gut microbiota through diet, for example through the administration of beneficial bacteria known as probiotics, according to the CSIC.




Piaget’s Theory of Development Involving Human Intelligence Incorporates Schemas
Esther Barros-Garcia
 Cañada College



Abstract
Piaget's theory of development involving human intelligence incorporates the concept of schemas. Schemas are mental representations of ideas, concepts, and objects. As humans we make great efforts to reach a state of understanding and equilibrium. When information is not understood we move into a state of disequilibrium, a feeling of discomfort caused by unfamiliar information, which drives us to assimilate and accommodate our schemas to return to a state of equilibrium. To Piaget, development is an increase in the number and complexity of schemata, and the drive toward equilibrium is the force that keeps us motivated to learn. We strive to be at equilibrium; we do not like the frustration of dealing with unfamiliar knowledge. Equilibrium: when a child's schema is capable of explaining what he or she perceives from the outside world. Disequilibrium: when a child experiences new information or stimuli for the first time, is unsure how to process the information, and begins to create or expand existing schemas.









Piaget’s Theory of Development Involving Human Intelligence Incorporates Schemas

Schemas are mental representations of ideas, concepts, and objects. An important aspect of the concept of schemas is assimilation, which is using an existing schema to deal with and understand new objects, situations, and information. Equally important is the concept of accommodation, which involves altering existing schemas to develop more complex ones, or even creating brand new schemas altogether, in order to deal with and understand new information. Lastly, equilibrium is when a schema is fully capable of explaining and interpreting information perceived from the outside world. As humans we strive to be in a state of understanding and equilibrium. When information is not understood we move into a state of disequilibrium, a feeling of discomfort from unfamiliar information, which drives us to assimilate and accommodate our schemas to return to a state of equilibrium. To Piaget, development equaled an increased number and complexity of schemas, or schemata.
Piaget's theory of development also includes four specific stages of development that are biologically universal to all children. The first stage is the sensorimotor stage (0-2 years). Children in this stage have a cognitive system limited to motor reflexes, and infants are busy discovering relationships between their bodies and the environment. The second stage is the preoperational stage (2-6 years). During this stage, children start to use mental imagery and language, and they are very egocentric. Piaget claims children in this stage are not able to comprehend cardinality and ordinality (the ability to realize that quantities are equal). The third stage is the concrete operational stage (7-11 years). At this stage the child can see and reason with concrete knowledge but still cannot see the abstract side of things or fully work out all the possible outcomes. They can understand conservation of number and of measures such as mass, weight, area, and volume. Lastly, the fourth stage is the formal operational stage (11+ years). This is the stage where children are able to think logically and theoretically. They can use symbols related to abstract concepts and easily envision how problems would be solved. To Piaget, this was the ultimate stage of development. He also believed that even when children reached this stage, they still needed to revise their knowledge base. Children by this stage are self-motivators. They learn from reading and trying out new ideas as well as from helping friends and adults. Piaget believed that not everyone reaches this stage of development.
Definition of number: Piaget's idea of a child's ability to understand number includes the capability to compare sets – the child's ability to give the correct answer about equality when items are positioned in a one-to-one ratio, and to judge equality when there were fewer than 4-5 items in a set (the intuitive numbers, 1-5). Also important was the concept of counting sets. Children would count and recount items in a row using words that represented numbers, such as "one, two, three," etc., known as counting words. Children learned that the last word used represented the total of the set. So although children were able to give the appropriate number word as their response regardless of the changing appearance of a row, Piaget believed this did not prove comprehension of number. A child being able to repeat the counting word as the correct answer did not guarantee that the child realized the quantity was equal both times. Tests of the abilities stated above were designed to see if children had an understanding of the cardinal property of number, but Piaget's theory of what it means for a child to comprehend number is more than just a test of cardinality. Later work by one of Piaget's collaborators incorporated the study of ordinality, a child's ability to understand equality using continuous as well as discontinuous quantities. An ordinality task included having a child agree that a set of 30 blocks was larger than a set of 6 blocks (discontinuous quantities); blocks from the large set were then dumped down a slide, and children were unable to recognize that the new pile forming at the bottom would eventually contain the exact same quantity as the original small set of 6 blocks. They were unable to relate the equality of the new continuous set being formed to the discontinuous set of 6.
Point one: Critics of Piaget claim that he did not play an influential role in the development of child psychology, and they could not be more wrong. His critics are wrong because, for one, they oversimplified Piaget's theory of children's understanding of number. Critics conducted their own research (Gelman, 1972; McGarrigle & Donaldson, 1975; Mehler & Bever, 1967) and found that children as young as 3 were not deceived by the changing appearance of a set and were able to give the correct answer regarding equality. This opposed Piaget's research; however, although these young children, who were still in the preoperational stage, were able to provide the correct answer, there is no agreement regarding which operational level was required to perform these new conservation tasks. Children involved in post-Piaget research could have easily counted the items in the sets because fewer items were used in these sets compared to the sets in Piaget's research. Children could also have relied on their natural ability to perceive small numbers (intuitive numbers) (Benat, Lehalle, & Joven, 2004). Without agreement as to which operational level was required to complete the tasks, children from different operational stages could have completed them, making the post-Piaget research incomparable to Piaget's.
The second argument used by Piaget's critics is that many young children still in the preoperational stage of development had the ability to count in general. Having mastered the ability to count meant (a) always using the same sequence of counting words, (b) using only one counting word per object, (c) using the last counting word to represent the total (the quantity of items in the set), (d) realizing that any set of objects could be counted, and (e) understanding that objects could be counted in any order. The ability of the child to count and repeat a counting word as the answer when questioned by the researchers became a learned social convention, or learned response; it did not prove comprehension that the quantities were equal. When the questions were rephrased and the children were asked to hand the total number of items to the researchers, they did not know how many items was the correct amount to give.
Critics do not have a proper understanding of Piaget's writing (Lourenço & Machado, 1996; Bond & Tryphon, 2007). Post-Piaget research "works" in proving that children have the ability to understand the value of cardinal numbers ONLY because it does not involve Piaget's definition of what number is: a necessary synthesis of both ordinality and cardinality. Critics did not include Piaget's definition of number in their research; therefore, their arguments against Piaget are invalid (Desrochers, 2008).
Causality: Piaget's critics further misunderstand his work regarding a child's understanding of physical causality by making the mistake of referring only to his earlier books, The Child's Conception of the World (1929/1930) and The Child's Conception of Physical Reality (1927/1930). These books only organized children's explanations for naturally occurring (physical) phenomena. His later books, Understanding Causality (1971/1974), La Transmission des mouvements (1972), and Epistemology and Psychology of Functions (1968/1977), were the books that involved his ideas of how children reason about mechanical causation. Not only were his critics using the wrong material for comparison, but Piaget's ideas of causality were not fully developed until the 1960s at the International Centre for Genetic Epistemology (ICGE). So, without a clear model to compare against, post-Piaget critics have no valid argument against Piaget.
Genevan researchers (i.e., Piaget) hold that young children still in the preoperational stage of development cannot fully process all aspects of mechanical causation, and they are correct. Children can form two-term cause-and-effect relationships, for example, two balls colliding: when one ball moves at a higher rate of speed, children can see that it causes the second ball to be projected further. This understanding is very basic and can be represented by the formula y = f(x), i.e., distance of the projected ball = f(amount of force from the first ball). If any other terms are involved, preoperational children have trouble explaining the relationship of the added term. Only older children in the concrete operational stage can easily understand a three-term relationship; Piaget said these children have reached a level of understanding of composition of functions, leading to a basic understanding of more sophisticated models. The research of Piaget's critics (in the 1980s) states that Piaget is wrong and that preoperational children can understand causality, because in their experiments Shultz (1982) found that children understood which apparatus caused a certain effect, for example choosing a lamp as the cause of a spot of light. However, this understanding of causality relates only to the simple formula Piaget identified in preoperational children; it does not mean the children have reached an understanding similar to that of concrete operational children, who understand three-term cause-and-effect relationships. When all the aspects of Piaget's (Genevan) developmental model are properly taken into account, it is difficult to relate the work of post-Piaget critics to it. Again, it is necessary to understand that the ideas of Piaget were not fully carried out until later research, completed from 1955 to 1980 by his supporters at the ICGE. The research done by his supporters became known as "the Genevan Models" and should be taken into account when evaluating how relevant Piaget actually was to the understanding of children's cognitive development.
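To make the distinction concrete in notation (my own illustration, with hypothetical function names, not wording taken from Piaget), the two relationships can be written as:

    y = f(x)                   (two-term: the projected distance depends directly on the force applied)
    y = g(f(x)) = (g ∘ f)(x)   (three-term: x determines an intermediate quantity f(x), which in turn determines y)

Grasping the second line requires holding the intermediate term in mind, which is the composition of functions Piaget placed in the concrete operational stage.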
Finally, in an article written by Lourenço and Machado (1996), ten of the most common criticisms of Piaget's ideas are tackled and corrected, proving that Piaget was and still is crucial to the understanding of children's cognitive skills and development. One incorrect argument that is often cited is that Piaget's theory establishes age norms and that new post-Piaget research disconfirms these norms. This is a huge misconception of Piaget's theory; age is not a criterion for defining a developmental level. According to Piaget, the key element was the sequence of cognitive transformations – starting first from the sensorimotor stage, then moving to the preoperational stage, followed by the operational stages, and finally reaching the developmental level of formal thinking. Age is merely an indicator of which developmental level the child currently possesses; it is not the element on which the current level is based. Critics of Piaget thought that if they were able to show that children below the age of 11-12 could demonstrate deductive reasoning skills, this would constitute formal thought (Ennis, 1982), and that this would disprove Piaget. Researchers devised tests that showed children of age 5-6 could in fact show simple reasoning and deductive skills. Children were able to correctly conclude that "Mary is at school" from the following reasoning exercise: "If John is at school, then Mary is also at school. John is at school; what can we say about Mary?" Piaget himself refuted this argument, stating that the ability to solve these problems based on perceived logic does not prove a child is using formal operations, because when using formal operations the subject must show the ability to comprehend, envision, and select the correct answer from all possible outcomes. Perhaps on the surface it may appear that Piaget's critics have on occasion disproved him with their contrary research, but when examined more closely, and when all major aspects of Piaget's theory of development are incorporated, it is clear that the findings of much post-Piaget research are not comparable to Piaget's and in no way dismiss his contribution to the world of psychology.














References
Bullock, M., Gelman, R., & Baillargeon, R. (1982). The development of causal reasoning. In W. J. Friedman (Ed.), The developmental psychology of time (pp. 209–254). New York: Academic Press.
Cowan, R. (1987). When do children trust counting as a basis for relative number judgements? Journal of Experimental Child Psychology, 43, 328–345.
Desrochers, S. (2008). From Piaget to specific Genevan developmental models. Child Development Perspectives, 2(1), 7–12. doi:10.1111/j.1750-8606.2008.00034.x
Gelman, R., Bullock, M., & Meck, E. (1980). Preschoolers' understanding of simple object transformations. Child Development, 51, 691–699.
Gréco, P. (1960). Recherches sur quelques formes d'inférences arithmétiques et sur la compréhension de l'itération numérique chez l'enfant. In P. Gréco, J. B. Grize, S. Papert, & J. Piaget (Eds.), Problèmes de la construction du nombre (pp. 149–213). Paris, France: Presses Universitaires de France.
Lourenço, O., & Machado, A. (1996). In defense of Piaget's theory: A reply to 10 common criticisms. Psychological Review, 103(1), 143–164. doi:10.1037//0033-295X.103.1.143








URL Intro Video               

URL Syllabus Overview 

URL Lecture 1   

URL Lecture 2   

URL Video Example of Ethnography       

URL Student Field Research Project Examples   

URL Field Research: Margaret Mead Documentary

URL Types of Psychological Research: Discovering Psychology    

URL Lecture 4   

URL Problems with Secondary Sources in Historical Research: The Spanish American War & the Media Documentary      

URL Ongoing issues with journalistic accuracy: The current case of NBC Anchor Brian Williams     


URL What is "Yellow Journalism?" - Overview Video       

URL Lecture 5


URL A Case Study in Psychology: Genie Wiley    

URL Lecture Series on Case Studies: Graham R Gibbs at the University of Huddersfield pt 1         
URL Lecture Series on Case Studies by Graham R Gibbs at the University of Huddersfield pt 2     

URL Lecture Series on Case Studies by Graham R Gibbs at the University of Huddersfield pt. 3   

February 23 - March 1    URL Lecture 6   

URL Video lecture from Dr. Kristine Florczak on Quantitative research    

URL Qualitative and Mixed Methods Research  
