Below are some common queries that we have received on our Telegram Group (link: https://t.me/joinchat/yfYAwnrvpaliMDI1). We update this page regularly.
This is commonly seen with negatively worded items. You have two options: either work with two factors or drop the negatively worded items.
See chapter 6 of the DeVellis book; you will also find further references there.
The ratio of 1:10 is advisable, but it does not apply in a linear manner. A sample size of 250-300 should be good even if you have more items in your scale.
A sample size of 400 or above is good even if you have large item sets and the formula of 1:5 or 1:10 is violated.
Reference: DeVellis, R. F. (2016). Scale Development: Theory and Applications. Sage.
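As a rough illustration of this rule of thumb, here is a minimal sketch (the item counts are hypothetical, and the ~400 "practical cap" simply encodes the advice above that the ratio need not be applied linearly):

```python
# Sketch of the respondent-to-item rule of thumb discussed above.
# The item counts are hypothetical; the cap reflects the advice that the
# 1:5 / 1:10 ratio need not be applied linearly and that roughly 300-400
# responses are usually adequate even for long item pools.

def suggested_sample_size(n_items: int, ratio: int = 10, practical_cap: int = 400) -> int:
    """Ratio-based target, capped at a practical level of about 400 respondents."""
    return min(n_items * ratio, practical_cap)

for items in (10, 25, 60):
    print(f"{items} items -> aim for roughly {suggested_sample_size(items)} respondents")
```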
Refer to pages 139-146 of the DeVellis book. It explains the various response-scale options for Likert-type scales.
Reference: DeVellis, R. F. (2016). Scale Development: Theory and Applications. Sage.
This is not possible by looking at statistics. It has to be determined by theory.
Usually, when you are doing an individual-level analysis, you call it perceived enjoyment. This is a perception and differs from person to person. If you do a group-level or organisational-level analysis (where you aggregate individual responses to get a group or organisational score), you call it enjoyment, because it is now the aggregated score of all members. You will need to do multilevel analysis here.
SEM tests relationships between latent constructs and is done in SEM software (e.g., AMOS).
Regression tests relationships between observed, imputed, or averaged construct scores and is done in regression software (e.g., SPSS).
SEM = CFA + Regression
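For concreteness, here is a minimal sketch of the same distinction. The indicator names, the simulated data, and the choice of the semopy and statsmodels packages are our own illustrative assumptions; the answer above refers to AMOS and SPSS.

```python
# Sketch: latent-variable SEM vs. regression on averaged item scores.
# Indicator names (x1..x3, y1..y3), simulated data, and the semopy /
# statsmodels packages are assumptions made for illustration only.
import numpy as np
import pandas as pd
import semopy
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
f_x = rng.normal(size=n)                          # latent predictor
f_y = 0.5 * f_x + rng.normal(scale=0.8, size=n)   # latent outcome
df = pd.DataFrame({
    **{f"x{i}": f_x + rng.normal(scale=0.6, size=n) for i in (1, 2, 3)},
    **{f"y{i}": f_y + rng.normal(scale=0.6, size=n) for i in (1, 2, 3)},
})

# (1) SEM: measurement model (the CFA part) plus a structural path,
#     written in lavaan-style syntax.
desc = """
X =~ x1 + x2 + x3
Y =~ y1 + y2 + y3
Y ~ X
"""
sem = semopy.Model(desc)
sem.fit(df)
print(sem.inspect())                # loadings and the structural estimate

# (2) Regression: average the observed items first, then regress.
df["X_avg"] = df[["x1", "x2", "x3"]].mean(axis=1)
df["Y_avg"] = df[["y1", "y2", "y3"]].mean(axis=1)
print(smf.ols("Y_avg ~ X_avg", data=df).fit().summary())
```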
Do nothing; just let it remain. It is not too bad.
You should not reduce items on your own. EFA is not needed unless you are developing your own scale or have some issues with the existing measure.
The best way is to do CFA (whether with 3, 7, or 15 items) and then prune items that have poor loadings, as sketched below. Ideally, you should retain as many items as possible. Do not drop items of existing measures without any reason.
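A small illustration of the pruning step (the loading values and the 0.5 cutoff are hypothetical placeholders; choose a cutoff supported by the literature you follow):

```python
# Sketch: flag items whose standardized CFA loadings fall below a cutoff.
# The loadings and the 0.5 threshold are hypothetical placeholders.
standardized_loadings = {
    "item1": 0.78, "item2": 0.71, "item3": 0.42,
    "item4": 0.66, "item5": 0.55, "item6": 0.38,
}

CUTOFF = 0.50
retain = [k for k, v in standardized_loadings.items() if v >= CUTOFF]
review = [k for k, v in standardized_loadings.items() if v < CUTOFF]

print("Retain:", retain)
print("Consider dropping (and document the reason):", review)
```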
Amos uses covariance-based SEM (CB-SEM), whereas Smart PLS uses the partial least squares (PLS) method for performing SEM.
If the multivariate normality assumption is violated, you have a small sample size, or the purpose of your study is prediction rather than confirmation of theory, then you may go for PLS-SEM.
Amos uses the covariance-based method, whereas Smart PLS uses the partial least squares approach. The algorithms used by the two software packages are different. However, some researchers see these two methods of model testing as complementary to each other.
Both can be used for testing parallel mediation. PLS-SEM may not give you model fit indices other than R-square.
Here’s a playlist on Partial Least Squares Structural Equation Modelling by Prof. Arun, a faculty member at S K Somaiya College, Mumbai, and a SkillsEdge community member. We hope you find it useful. These videos will help researchers understand the nuances of Structural Equation Modelling using a partial least squares approach.
https://youtube.com/playlist?list=PLGBKkGD8cqiKF27-_UnVbYfPzOYs10zQQ
Please refer https://www.analysisinn.com/post/how-to-calculate-average-variance-extracted-and-composite-reliability/
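The linked post walks through the usual formulas; as a minimal sketch, the same calculation from standardized loadings (the loading values below are hypothetical):

```python
# Sketch: AVE and composite reliability (CR) from standardized CFA loadings.
#   AVE = (sum of squared loadings) / (number of items)
#   CR  = (sum of loadings)^2 / [(sum of loadings)^2 + sum of (1 - loading^2)]
# The loading values are hypothetical.

def ave(loadings):
    return sum(l ** 2 for l in loadings) / len(loadings)

def composite_reliability(loadings):
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

loadings = [0.72, 0.68, 0.80, 0.75]
print(f"AVE = {ave(loadings):.3f}")                    # commonly compared against 0.50
print(f"CR  = {composite_reliability(loadings):.3f}")  # commonly compared against 0.70
```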
You can do hierarchical CFA (second-order CFA) for constructs 1 and 2. Do first-order CFA for construct 3.
Another way is to parcel the items of each dimension of constructs 1 and 2. Then do first-order CFA using the parcels for constructs 1 and 2, and a normal first-order CFA for construct 3 (see the sketch below).
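A minimal pandas sketch of parceling (the column names, the item-to-dimension mapping, and the simulated Likert responses are hypothetical): each dimension's items are averaged into one parcel, and those parcels then serve as indicators in the first-order CFA.

```python
# Sketch: build item parcels by averaging the items of each dimension.
# Column names, the item-to-dimension mapping, and the data are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
cols = ["c1d1_i1", "c1d1_i2", "c1d1_i3",      # construct 1, dimension 1
        "c1d2_i1", "c1d2_i2", "c1d2_i3",      # construct 1, dimension 2
        "c2d1_i1", "c2d1_i2",                 # construct 2, dimension 1
        "c3_i1", "c3_i2", "c3_i3"]            # construct 3 (no parceling)
df = pd.DataFrame(rng.integers(1, 6, size=(200, len(cols))), columns=cols)

dimensions = {
    "c1_dim1_parcel": ["c1d1_i1", "c1d1_i2", "c1d1_i3"],
    "c1_dim2_parcel": ["c1d2_i1", "c1d2_i2", "c1d2_i3"],
    "c2_dim1_parcel": ["c2d1_i1", "c2d1_i2"],
}
for parcel, items in dimensions.items():
    df[parcel] = df[items].mean(axis=1)       # one parcel per dimension

# The *_parcel columns become the indicators of constructs 1 and 2 in a
# first-order CFA; construct 3 keeps its original items (c3_i1..c3_i3).
print(df.filter(like="parcel").head())
```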
Control variables are nothing but independent variables that also impact your DVs. So, it is fine to have the other IV as a covariate. The only limitation is that PROCESS will give indirect effect and other outputs only for the variable that you declare as IV, and not for the covariate. To compute for the covariate, you should declare it as IV and the IV as a covariate in the next run.
Suggestions:
The first example discussed in chapter 7 of the Hayes (2013) textbook is of this kind.
Reference:
Hayes, A. F. (2013). Introduction to Mediation, Moderation, and Conditional Process Analysis: A Regression-Based Approach. Guilford Press, New York.
Yes, you can use the term and can report the mediation. See section 6.1 of Hayes (2013). The presence of a significant relationship between the IV and the DV is not necessary for testing mediation.
Reference:
Hayes, A. F. (2013). Introduction to Mediation, Moderation, and Conditional Process Analysis: A Regression-Based Approach. Guilford Press, New York.
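Since this point hinges on the indirect effect rather than the total effect, here is a minimal Python sketch of a percentile-bootstrap test of a simple indirect effect. This is an illustrative alternative to the PROCESS macro, not PROCESS itself; the data and variable names are simulated.

```python
# Sketch: percentile-bootstrap confidence interval for a simple indirect
# effect (a*b), the quantity PROCESS reports. Simulated data; illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 300
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)          # IV -> mediator (path a)
y = 0.5 * m + rng.normal(size=n)          # mediator -> DV (path b)
df = pd.DataFrame({"x": x, "m": m, "y": y})

def indirect_effect(d):
    a = smf.ols("m ~ x", data=d).fit().params["x"]
    b = smf.ols("y ~ m + x", data=d).fit().params["m"]
    return a * b

boot = [indirect_effect(df.sample(frac=1, replace=True)) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(df):.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
# If the CI excludes zero, mediation is supported even when the total
# x -> y effect is not significant (Hayes, 2013, section 6.1).
```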
Whatever results one presents should have a strong theoretical backing and should be generalisable. If one just goes by what the data is telling, then it may not be very interesting.
Refer to Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review, 2(3), 196-217.
While developing our research models, talking to practitioners is very important. This is what makes our research models relevant and practical.
Moderated mediation happens only when one or more of the paths to or from the mediator are moderated. (If you are talking about moderation of the mediation, then it is conditional process analysis, i.e., moderated mediation.)
If you are using a moderator too, I suggest that you create a measurement model with all constructs (including the moderator) along with the CMB factor. Then, impute the latent factor scores from this measurement model and try running the model using PROCESS.
Yes, it should be possible. Declare one as IV and the other variable as a covariate in PROCESS.
It is fine. You don’t need to drop the items. But you should definitely report bootstrapping results to show that you have taken care of the non-normality in the data.
You must situate the gap using the literature. If you don’t cite literature, it will not be considered valid and can be questioned.
Refer: https://www.tandfonline.com/doi/pdf/10.1080/09585192.2013.870311
The "Does not apply" option should usually be used only during pilot testing. This will help you understand whether the items in your scale make sense to the respondents. In the actual data collection, it is recommended not to use the "Does not apply" option.
You should try to limit yourself to 1-2 theories per paper. Managing multiple theories may become difficult and make the writing less coherent and more fragmented.
Yes. You have to enter all details about the paper (correct format of author names, title, journal, volume, issue, and page nos.) correctly in Mendeley. This is the first step. Only then will the citation be proper.
One suggestion for quick and efficient entry in Mendeley: if a DOI is available, go to “Add Entry Manually” (just below Files, there is a dropdown option). A new window opens. Go to the bottom and enter the DOI. Save. All fields will be filled automatically and accurately.
It will depend on the kind of paper. If you are writing an empirical paper, then anywhere between 20-40 references should be good. If it is a review paper, then you will have to show that you have read almost everything that is relevant, so the number of references will be high. Usually, there is no upper limit. Exercise your judgment.
Second suggestion: check papers published in the journal you are targeting to get a rough range for the number of references and citations in your paper.
We can do sentiment analysis of text. This is the method used in most research papers that study printed text or blogs (see the sketch below).
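As one possible starting point (the choice of tool is ours and only illustrative), here is a minimal sketch using NLTK's VADER sentiment analyzer on short example texts:

```python
# Sketch: rule-based sentiment scoring of short texts with NLTK's VADER.
# VADER is one of several possible tools; the example texts are made up.
import nltk
nltk.download("vader_lexicon", quiet=True)   # one-time lexicon download
from nltk.sentiment import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()
texts = [
    "The new policy made our work much easier and more enjoyable.",
    "The rollout was confusing and poorly communicated.",
]
for t in texts:
    print(sia.polarity_scores(t), "->", t)   # neg/neu/pos and compound score
```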
Objectives come before questions. Objectives define what you want to do and are broad/abstract. Research questions are more directed and specific.
Take reviewer comments very seriously.