Ontology refers to the nature of existence, or the determinants of the nature of existence. It is the researcher's knowledge (what he or she believes) that determines the nature of existence, not the things being researched. Thus, a researcher's acceptance or rejection of the claim that a phenomenon has a certain shape, size, and color depends on the type of knowledge, or belief, that influences the researcher's view of the phenomenon. Therefore, no account of any phenomenon is value free, even though objectivity remains the criterion by which the worth of any research paper is judged.
Epistemology
Epistemology refers to the method that applies to any scientific investigation. It answers the obvious question: why do you prefer one method over the alternative methods that exist in a particular discipline or field of study?
Aim
An aim is what you want to achieve from a given research endeavor. Your aim should direct you to formulate clear research questions and locate data for your research. Without an aim, a researcher will achieve close to nothing. A paper without a clear aim has no meaning! You must settle on an aim to be able to put forward a logical argument. Otherwise, your paper will become aimless.
Puzzle
A puzzle is a hypothetical question that leads a researcher to investigate a phenomenon whose nature is unknown.
Research Question
A research question is a fundamental question that leads to an inquiry into a given phenomenon, whether known or unknown. It is an overarching question (or set of questions) from which the entire research process draws the threads that link one part to the other.
Phenomena
Phenomena are things that can be observed in the world, through which researchers gain the knowledge to understand the world. Thus, a phenomenon is something whose nature of existence a researcher sets out to investigate.
I am planning to undertake a research project for my dissertation. How many people do I need to sample to find results, effects, or relationships?
Hello Stephan,
Thank you for your post! You will need to conduct sample size analysis.
If you choose to use a power calculator, there are Web sites that offer statistical power calculators. The following Web sites offer options for calculating sample sizes for a variety of problems.
• http://calculators.stat.ucla.edu/
If the UCLA power calculator is down please use:
• http://www.stat.uiowa.edu/%7Erlenth/Power/
• http://www.psycho.uni-duesseldorf.de/aap/projects/gpower/
Think also about your design: independent and dependent t-tests, correlation, and 3-5 group ANOVA designs.
In order to calculate the necessary sample size for your study, you will need to determine values for three items. Whether you use tables or calculators on Web sites, you will need to know the same three pieces of information, each of which is explained in detail under a separate heading below.
• Statistical Power
• Alpha
• Effect size
Statistical Power
The first piece of information you need when conducting a sample size analysis is the statistical power you want your test to have. Statistical power, often referred to simply as power, is the probability that a given statistical test will detect a real treatment effect or real relationship between variables. You need a large enough sample to ensure a reasonable likelihood of detecting a difference (or relationship) if it really exists in the population. If there is not a reasonable chance of detecting a real treatment effect or relationship, then there probably is not a reason to do the study. High statistical power also helps improve the chances that findings are not due to chance alone.
The accepted value for power (the probability that a test will detect a real treatment effect or real relationship) is .80 (80%).
This means that given a specific sample size, you would expect to find a real, or true, effect 80% of the time. In other words, if a study were repeated 100 times, the null hypothesis would be correctly rejected 80 times if there were indeed a real effect or relationship. By contrast, if your statistical test has statistical power of .50 (50%), your test will do no better at finding a real or true effect than tossing a coin. You are just as likely to guess the outcome by flipping a coin as by spending the time to collect and analyze data.
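As a rough illustration of how power, alpha, and effect size combine, the required per-group sample size for a two-sample comparison can be approximated with a normal-approximation formula. This is a simplified sketch (the function name and defaults are my own), not a substitute for a dedicated power calculator, which uses the t distribution and will give a slightly larger n:

```python
import math
from statistics import NormalDist  # standard library, Python 3.8+

def required_n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided, two-sample comparison.

    Uses the normal approximation:
        n = 2 * ((z_{1 - alpha/2} + z_{power}) / d) ** 2
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical z for a two-sided test
    z_beta = NormalDist().inv_cdf(power)           # z corresponding to desired power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Medium effect (d = .50), alpha = .05, power = .80
print(required_n_per_group(0.5))  # 63 per group under the normal approximation
```

Note how the sample size balloons for small effects: with d = .20 the same formula requires roughly 393 people per group, which is why the effect size section below matters so much.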
Alpha
The second value that must be determined is the alpha (α) level, which is set by the researcher. Like power, this value is predetermined. The conventional values used are
α = .05 or α = .01. When you choose a larger value for alpha, you expand the rejection region. If you expand this area, you provide more opportunities to reject the null hypothesis (correctly). Thus, larger values of alpha (.05) result in more power than small values (such as .01).
It is standard practice to set the alpha level at .05, for most psychological research.
Essentially, this means that when the null hypothesis is true, there is only a 5% chance that you will wrongly reject it (a Type I error); in other words, there is a 95% chance that you will arrive at the right conclusion when no real effect exists. (Recall Type I and Type II errors from your statistics course.)
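The meaning of alpha can be checked by simulation: when the null hypothesis is true (both groups drawn from the same population), a test run at α = .05 should reject in roughly 5% of repeated studies. The sketch below is illustrative only, using a simple z-test approximation on simulated data with a fixed random seed:

```python
import random
from statistics import mean, stdev

def z_test_rejects(group_a, group_b, z_crit=1.96):
    """Two-sample z-test approximation: reject if |z| exceeds the critical value."""
    n = len(group_a)  # assumes equal group sizes
    se = ((stdev(group_a) ** 2 + stdev(group_b) ** 2) / n) ** 0.5
    z = (mean(group_a) - mean(group_b)) / se
    return abs(z) > z_crit

random.seed(42)  # fixed seed so the simulation is reproducible
n_sims, n_per_group, rejections = 5000, 50, 0
for _ in range(n_sims):
    # Null hypothesis is true: both groups come from the same N(0, 1) population
    a = [random.gauss(0, 1) for _ in range(n_per_group)]
    b = [random.gauss(0, 1) for _ in range(n_per_group)]
    rejections += z_test_rejects(a, b)

print(rejections / n_sims)  # close to .05, the alpha level
```

Raising alpha to .10 (z_crit ≈ 1.645) roughly doubles the rejection rate under a true null, which is the power-versus-Type-I-error tradeoff described above.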
Effect Size
The final value that must be determined is the effect size. Unlike power and alpha, which rely on predetermined conventions, this value must be calculated by hand. Recall that an effect size gives an indication of how “large” an effect is or how “strong” a relationship is. If an intervention or the like has a large effect, it stands to reason that fewer people would need to be assessed to detect this effect. Conversely, a small effect will require more people to detect an effect. If you are interested in learning more about effect size, there are entire books written about the topic.
One way to calculate a value for an effect size is to use prior research. Recall from your statistics course that:
Effect size = Mean Difference/Standard Deviation.
For the simple two-group design, the effect size is given by the difference between the two group means (for example, the before- and after-treatment means) divided by the pooled standard deviation. Cohen's d is one measure of effect size. It is closely related to the t statistic and is calculated as:
d = (M1 − M2) / SD
where M1 and M2 are the two group means and SD is the pooled standard deviation.
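This formula can be applied directly to means and standard deviations extracted from published articles. A minimal sketch (the function name and example values are my own; it pools the two standard deviations by averaging the variances, which assumes roughly equal group sizes):

```python
import math

def cohens_d(mean1, mean2, sd1, sd2):
    """Cohen's d from the summary statistics of two groups.

    Pools the standard deviations by averaging the variances,
    which is appropriate when the group sizes are (roughly) equal.
    """
    pooled_sd = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return (mean1 - mean2) / pooled_sd

# e.g., treatment mean = 85, control mean = 79, both SDs = 12
print(cohens_d(85, 79, 12, 12))  # d = 0.5, "medium" by Cohen's conventions
```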
Cohen specified the following effect size conventions:
Small: d = .20
Medium: d = .50
Large: d = .80
Other measures of effect size include the squared correlation coefficient (r²); R² (from a multiple regression); and ω² (a measure of effect size for analysis of variance that provides values similar to R²).
Small effect: ω² = .01
Medium effect: ω² = .06
Large effect: ω² = .14
Lipsey and Wilson (1993) provide effect sizes for a number of psychological, educational, and behavioral treatments. This document is included in this week’s readings.
You may need to review the literature in your area to determine the magnitude of typical effect sizes. You can do this by extracting means and standard deviations from articles and computing a rough estimate.
There are some terms that often confuse students during data analysis, and I would like to explain them.
What is meant by ‘small’, ‘medium’ and ‘large’?
In Cohen's terminology, a small effect size is one in which there is a real effect (something is really happening in the world) but one that you can only see through careful study. A "large" effect size is an effect that is big enough, and/or consistent enough, that you may be able to see it "with the naked eye." For example, just by looking at a room full of people, you would probably be able to tell that, on average, the men were taller than the women; this is what is meant by an effect that can be seen with the naked eye. (In fact, the d for the gender difference in height is about 1.4, which is very large, but it serves to illustrate the point.)
I'm struggling to understand what a theory is. Unlike what I have read in various places on the Web, your blogs provide clear explanations of various research and academic writing issues. Your organization is expert in research and academic writing. I would appreciate your clear definition of a theory and a theoretical framework.
Hello Vincent!
Thank you for your interest.
A theory is the logic behind how you think about solving a given problem.
A theoretical framework represents the different components of a theory (or theories) necessary to estimate or measure the problem.