Research proposal
Introduction
Is the h-index really fair in assessing scientific performance? It was developed by Hirsch to characterize, by means of a single number, both the productivity and the impact, or influence, of a scholar. Being conceptually simple and easy to obtain and calculate, the h-index was eagerly accepted by scientists. The measure is used in decision-making processes for awarding grants and allocating research funds, and in predicting potential candidates for the Nobel Prize. So far, no substitute has been approved by the scientific community.
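For reference, the h-index is the largest number h such that the author has published h papers with at least h citations each. A minimal Python sketch of the calculation (the citation lists are invented for illustration):

    # h-index: the largest h such that h papers have at least h citations each.
    def h_index(citations):
        counts = sorted(citations, reverse=True)
        h = 0
        for rank, c in enumerate(counts, start=1):
            if c >= rank:
                h = rank   # this paper still has enough citations
            else:
                break      # all further papers have fewer citations
        return h

    print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have >= 4 citations each
    print(h_index([25, 8, 5, 3, 3]))  # 3: the single 25-citation paper adds nothing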
But there are still some disadvantages, which were summarized by Bornmann, Mutz, et al. (2011) as follows: it is field-dependent, it may be influenced by self-citations, it does not take multi-authorship into account, it depends on the scientific age of a scientist, it can never decrease, and it is only weakly sensitive to highly cited papers.
Attempts to improve the existing h-index or to discover a substitute are still being undertaken. One of the recently proposed approaches is tested in this research.
Literature review
Currently, scientists are seeking a single scalar measure or indicator to assess a scientist's contribution. Scalar here means a number that can easily be subjected to arithmetical operations such as addition, subtraction and multiplication without distorting the meaning of the results, in contrast to a vector, which has not only a magnitude but a direction as well, so the meanings of two vectors cannot simply be added. In bibliometrics this can be illustrated by the case where author A cites author B in a critical way, yet author B still gets an additional citation count, and hence his citation indicator grows.
Yet Costas & Bordons (2007) found that the h-index is highly correlated with the absolute numbers of publications and citations, which once more demonstrates its field dependence; they also pointed out the need to include other dimensions in the analysis of the research performance of scientists, and the risks of relying on the h-index alone. Bornmann, Mutz, et al. (2011) conducted the first meta-analysis of studies that computed correlations between the h-index and 37 different variants of it that had been proposed and discussed in the literature by 2010. The high correlation between the h-index and its variants indicated that the variants hardly provide information beyond the h-index itself.
Despite the results of this meta-analysis, I would like to test the thermodynamic approach proposed by Gangan Prathap. It is the latest theory of this kind and is not covered by Bornmann's analysis. Although no other researcher has yet endorsed Prathap's theory, it seems sensible to me, and Prathap has not given up: he is still trying to prove its significance.
The theory states that each paper has an Energy, denoted e, which is calculated as e = c^2, where c is the number of citations received by that particular paper. The full Energy of an author is the sum of the energies of all his or her papers, while the Exergy is X = iC, where i = C/P is the average citation rate, C is the total number of citations and P the total number of publications. The performance indicator is then p = X^(1/3).
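To make the arithmetic concrete, here is a minimal Python sketch of the calculation described above; the author's figures are invented for illustration:

    # Per paper: energy e = c^2. Per author: i = C/P, Exergy X = iC = C^2/P,
    # and the performance indicator p = X^(1/3).
    def p_index(total_citations, total_papers):
        i = total_citations / total_papers   # average citations per paper
        x = i * total_citations              # Exergy, X = iC
        return x ** (1.0 / 3.0)              # p = X^(1/3)

    # Hypothetical author: 400 citations spread over 25 papers.
    print(p_index(400, 25))  # i = 16, X = 6400, p = 18.56...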
He has been criticized by a number of scientists, such as Leydesdorff, Waltman and Franceschini, who assert that the analogy with thermodynamics is a mere coincidence, and that thermodynamics involves many more special conditioning factors, such as temperature, pressure and mass, whose equivalents cannot be found in bibliometrics. Other indicators have been proposed instead, such as the Integrated Impact Indicator (I3) and the crown indicator.
Leydesdorff & Opthof (2011) state that, unlike Prathap's scalar measures (Energy, Exergy and Entropy, or EEE), the Integrated Impact Indicator (I3) is based on non-parametric statistics using the percentiles of the distribution. This different approach takes into account not only the ratio and scale of the sample but also the shape of the distribution.

The crown indicator was introduced by the Centre for Science and Technology Studies (CWTS) at Leiden; it was the first attempt to put a normalization mechanism into practice and is hence known as the CWTS approach. Prathap gives a comprehensive overview of the development of these indicators in his paper, where he notes that the "crown indicator" is a variation of Schubert and Braun's (1986) RCR = MOCR/MECR (the mean observed citation rate divided by the mean expected citation rate). Among researchers this is also known as the "add-divide" method, because the calculation sequence is as follows: count all citations to the unit's publications and add them together; then add together all the world citation averages that correspond to the selected publications with respect to document type, publication year and research area; finally, divide the sum of citations by the sum of world averages. This was challenged by Opthof and Leydesdorff (2010), who proposed an alternative "divide-add" approach, and in response a new crown indicator was introduced by CWTS: the mean normalized citation score (MNCS) (Waltman et al. 2011b).

Bornmann and Mutz (2011) summed this up very neatly: both the old and the new crown indicator suffer from the weakness that all the operations are based on arithmetic averages of ratios or ratios of arithmetic averages, and since citation data are highly skewed, this does not lead to robust measures. Instead, Bornmann and Mutz (2011) extend an earlier idea (Bornmann 2010) to calculate a single-number measure of citation impact that is not based on the arithmetic average but uses reference distributions based on the calculation of percentiles. An expected value (EV) is then proposed, but, as Prathap argues, this is a proxy for overall quality and not for total performance. Leydesdorff et al. (2011) make the same observation.
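The difference between the two calculation orders can be seen in a few lines of Python; the citation counts and world averages below are invented placeholders:

    # "Add-divide" (old crown indicator) versus "divide-add" (MNCS-style).
    # citations[k] is the citation count of publication k; expected[k] is the
    # world average for publications of the same type, year and research area.
    citations = [10, 2, 0, 30]
    expected = [5.0, 4.0, 2.0, 6.0]

    # Old crown indicator: add all citations, add all world averages, then divide.
    add_divide = sum(citations) / sum(expected)

    # MNCS: divide each publication by its own world average, then average.
    divide_add = sum(c / e for c, e in zip(citations, expected)) / len(citations)

    print(add_divide)  # 42 / 17 = 2.47...
    print(divide_add)  # (2.0 + 0.5 + 0.0 + 5.0) / 4 = 1.875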
But Prathap still insists on the successful application of the Exergy approach in bibliometrics. He arrived at this theory through a number of other suggested indicators, such as the mock h-index, the p-index, the composite indicator and the Expected Value. It was found that "where the sample size is large (e.g., the scientific performance of 233 countries) and the values of citations and papers are also very large, the mock h-index and the original h-index are virtually indistinguishable" (Prathap, 2010e).
Another example is given by Prathap, who analyzed author productivity for six fellows elected to the Royal Society in 2006. It was shown that the p-index, or Exergy, differs from the h-index in favor of scientists whose citations greatly outnumber the papers they have published.
According to Bornmann & Marx (2011), further studies are needed to examine the significance of the h-index in different fields of application. According to Mingers (2009), some priorities for future related studies are:
• Validity of the h-index in large and diverse groups of researchers;
• Comparability of the h-index across and within the social sciences;
• Validation of the h-index by more sophisticated bibliographic analyses.
Problem statement
Up to now, the h-index has remained the dominant indicator of the extent of scientific performance, and no substitute has yet been approved by the scientific community. Furthermore, it is no longer used as a measure of scientific achievement only for single researchers (Glänzel, 2006). The index is also used to measure the scientific output of research groups (van Raan, 2006), scientific facilities (Kinney, 2007), and countries (Csajbók, Berhidi, Vasas, & Schubert, 2007). It figures in decision-making processes for awarding grants, allocating research funds, and predicting potential candidates for the Nobel Prize.
But the h-index is still not perfect. It is distorted when the citation count of one paper significantly exceeds the total number of papers; a high self-citation rate is another potential source of distortion; the h-index can never exceed the number of papers; and once a high h-index value is reached, a researcher can stop worrying, because it will never decrease. Like other bibliometric measures, the h-index depends on the length of an academic career, and it should be used only for comparing researchers of similar age (Bornmann & Marx, 2011). Exergy, in contrast, reflects the current state of an author's activity and allows a scientist's activity to be represented in many ways, including chronologically.
Objectives of the Study
The main purpose of the present study is to apply bibliometric analysis, namely the newly proposed thermodynamic approach, to compute the Exergy index, a proposed substitute for the h-index, as an indicator of the productivity of authors in a selected area of science who have been publishing from 1945 to 2011.
The 66 years of data will be harvested from databases such as Web of Science, Google Scholar and Scopus, which provide the data necessary to support a bibliometric study. Hence, the objectives of this study are as follows:
(1) To identify the top productive authors in Malaysia by counting their h-index and Exergy index.
(2) To test the thermodynamic approach.
(3) To compare the rankings produced by the Exergy index and the h-index.
(4) To determine the significance of the difference, if any, between the rankings by Exergy index and h-index.
(5) To compare the results of Google Scholar, Scopus and Web of Knowledge in terms of the completeness of the available data.
Research Questions
The research questions follow the objectives of the study:
(1) What is the significance of the newly proposed thermodynamic approach in bibliometrics?
(2) What is the difference between the author productivity rankings built using the Exergy index and the h-index?
(3) Who are the most productive authors publishing in Malaysia?
(4) What are the differences among Google Scholar, Scopus and Web of Knowledge as sources of bibliographic data for calculating author productivity?
Significance of the Study
A scientific dispute is ongoing about the significance of the approach newly proposed by Prathap. The majority of authors consider his analogy a mere coincidence, yet at the same time they are not so strict about the shortcomings of the h-index. Testing this thermodynamic approach will reveal the practical results of its application. Moreover, no prior research has examined author productivity patterns in Malaysia.
Methodology
A sample of authors will be chosen in a particular scientific area. All available bibliographic details, such as the number of publications, citation counts and the h-index, will be harvested for each author from Scopus, Web of Knowledge and Google Scholar. As Glanzel (2003) noted, publication activity over longer observation periods is greater than over short periods, since publication is a cumulative process; data will therefore be taken from the earliest available period until now.
Exergy will then be counted for each author according to the thermodynamic paradigm: X = iC, where i = C/P, C is the total number of citations and P the total number of publications, with the performance indicator p = X^(1/3). The study will then determine how significant the difference between the two measures is, if there is any, and whether there is any change in author ranking.
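One plausible way to test the significance of the difference between the two rankings (my assumption; the proposal does not prescribe a particular test) is Spearman's rank correlation, sketched here with invented author data:

    # Compare h-index and p-index (Exergy) rankings using Spearman's rho.
    from scipy.stats import spearmanr

    # Hypothetical per-author scores harvested from the databases.
    h_scores = [12, 9, 20, 7, 15]
    p_scores = [15.3, 14.1, 18.0, 16.2, 15.9]

    rho, p_value = spearmanr(h_scores, p_scores)
    print(rho, p_value)  # rho close to 1 means the two rankings largely agree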
References
Bornmann, L., Mutz, R., Hug, S. E., & Daniel, H. D. (2011). A multilevel meta-analysis of studies reporting correlations between the h index and 37 different h index variants. Journal of Informetrics, 5(3), 346-359.
Bornmann, L., & Marx, W. (2011). The h index as a research performance indicator. European Science Editing, 37(3), 77-80.
Bornmann, L., & Daniel, H. D. (2009). The state of h index research. Is the h index the ideal way to measure research performance? EMBO Reports, 10(1), 2.
Costas, R., & Bordons, M. (2007). The h-index: Advantages, limitations and its relation with other bibliometric indicators at the micro level. Journal of Informetrics, 1(3), 193-203.
Glanzel, W. (2003). Bibliometrics as a research field: a course on theory and application of bibliometric indicators.
Leydesdorff, L., & Opthof, T. (2011). A rejoinder on energy versus impact indicators. Scientometrics, 1-4.
Leydesdorff, L. (2009). How are new citation-based journal indicators adding to the bibliometric toolbox? Journal of the American Society for Information Science & Technology.
Mingers, J. (2009). Measuring the research contribution of management academics using the Hirsch-index. Journal of the Operational Research Society, 60(9), 1143-1153.
Opthof, T., & Leydesdorff, L. (2011). A comment to the paper by Waltman et al., Scientometrics, 87, 467–481, 2011. Scientometrics, 88(3), 1011-1016.
Opthof, T., & Wilde, A. (2009). The Hirsch-index: a simple, new tool for the assessment of scientific output of individual scientists. Netherlands Heart Journal, 17(4), 145-154.
Opthof, T., & Wilde, A. (2011). One more time: bibliometric analysis of scientific output remains complicated. Netherlands Heart Journal, 19(7), 359-360.
Park, H., & Leydesdorff, L. (2009). Knowledge linkage structures in communication studies using citation analysis among communication journals. Scientometrics, 81(1), 157-175.
Prathap, G. (2011f). A comment to the papers by Opthof and Leydesdorff, Scientometrics, 88, 1011–1016, 2011 and Waltman et al., Scientometrics, 88, 1017–1022, 2011. Scientometrics, 1-7.
Prathap, G. (2010a). The 100 most prolific economists using the p-index. Scientometrics, 84(1), 167-172.
Prathap, G. (2010b). Going much beyond the Durfee square: enhancing the h_T index. Scientometrics, 84(1), 149-152.
Prathap, G. (2010c). The iCE approach for journal evaluation. Scientometrics, 85(2), 561-565.
Prathap, G. (2010d). An iCE map approach to evaluate performance and efficiency of scientific production of countries. Scientometrics, 85(1), 185-191.
Prathap, G. (2010e). Is there a place for a mock h-index? Scientometrics, 84(1), 153-165.
Prathap, G. (2011a). The Energy–Exergy–Entropy (or EEE) sequences in bibliometric assessment. Scientometrics, 87(3), 515-524.
Prathap, G. (2011b). The fractional and harmonic p-indices for multiple authorship. Scientometrics, 86(2), 239-244.
Prathap, G. (2011c). Letter to the Editor: Comments on the paper of Franceschini and Maisano: Proposals for evaluating the regularity of a scientist’s research output. Scientometrics, 88(3), 1005-1010.
Prathap, G. (2011d). The quality-quantity-quasity and energy-exergy-entropy exegesis of expected value calculation of citation performance. Scientometrics, 1-7.
Prathap, G. (2011e). Quasity, when quantity has a quality all of its own—toward a theory of performance. Scientometrics, 88(2), 555-562.
Shi, A., & Leydesdorff, L. (2011). What do the cited and citing environments reveal about Advances in Atmospheric Physics? Advances in Atmospheric Sciences, 28(1), 238-244.
Vinkler, P. (2010a). The π-index: a new indicator to characterize the impact of journals. Scientometrics, 82(3), 461-475.
Vinkler, P. (2010b). The Evaluation of Research by Scientometric Indicators. Cambridge: Woodhead Publishing Limited.
Waltman, L., van Eck, N., van Leeuwen, T., Visser, M., & van Raan, A. (2011a). On the correlation between bibliometric indicators and peer review: reply to Opthof and Leydesdorff. Scientometrics, 88(3), 1017-1022.
Waltman, L., van Eck, N., van Leeuwen, T., Visser, M., & van Raan, A. (2011b). Towards a new crown indicator: an empirical analysis. Scientometrics, 87(3), 467-481.
Waltman, L., van Eck, N. J., van Leeuwen, T. N., Visser, M. S., & van Raan, A. F. J. (2011c). Towards a new crown indicator: Some theoretical considerations. Journal of Informetrics, 5(1), 37-47.
Waltman, L., Yan, E., & van Eck, N. (2011). A recursive field-normalized bibliometric performance indicator: an application to the field of library and information science. Scientometrics, 89(1), 301-314.
Jacsó, P. (2008). The plausibility of computing the h-index of scholarly productivity and impact using reference-enhanced databases. Online Information Review, 32(2), 266-283.