The SCImago Institutions Rankings (SIR) is an annual publication that evaluates the performance of research institutions worldwide. Developed by the SCImago Research Group, the SIR draws on data from the Scopus database to provide comprehensive assessments of universities, government research organizations, healthcare systems, and private-sector research entities. First released in 2009, the SIR is respected for its robust methodology and its coverage of a wide range of institution types.

Key Performance Indicators

The SCImago Institutions Rankings combine three main sets of indicators, weighted as follows, to evaluate research institutions:

  • Research Performance (50%)
  • Innovation Outputs (30%)
  • Societal Impact (20%)

SIR Ranking Methodology

The SIR ranking methodology incorporates several quantitative metrics, forming a composite indicator based on three main dimensions: research performance, innovation outputs, and societal impact. These metrics are carefully chosen to provide a balanced assessment of each institution's contributions across different areas:

  • Research Performance (50%): This is measured through indicators such as normalized impact (13%), excellence with leadership (8%), total output (8%), scientific leadership (5%), not own journals output (3%), own journals (3%), high-quality publications (2%), international collaboration (2%), open access (2%), and scientific talent pool (2%). These indicators assess the volume, impact, and quality of the research output.
  • Innovation Outputs (30%): This includes innovative knowledge (10%), patents (10%), and technological impact (10%). These indicators reflect the institution's ability to contribute to technological advancements and are measured through patent data and citations in patents.
  • Societal Impact (20%): This is assessed through indicators like altmetrics (3%), web size (3%), authority score (3%), sustainable development goals (5%), female scientific talent pool (3%), and impact in public policy (3%). These metrics reflect the broader societal influence and engagement of the institution.
The value of the composite indicator is normalized onto a scale of 0 to 100. Institutions with identical scores share the same rank, which leaves gaps in the subsequent ranking positions, as illustrated in the sketch below.
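
To make the aggregation concrete, the following Python sketch shows one way a weighted composite score could be assembled from the three dimension scores, rescaled onto a 0 to 100 range, and converted into ranks in which tied institutions share a position and the following position is skipped. Only the dimension weights come from the percentages above; the institution names, sample scores, and helper functions are purely illustrative and do not reproduce SCImago's actual implementation.

```python
# Illustrative sketch only: this is not SCImago's code, just the arithmetic
# implied by the published dimension weights (research 50%, innovation 30%,
# societal 20%). Institution names and scores are fictional.

WEIGHTS = {"research": 0.50, "innovation": 0.30, "societal": 0.20}

institutions = {
    "Institution A": {"research": 82.0, "innovation": 61.0, "societal": 70.0},
    "Institution B": {"research": 75.0, "innovation": 80.0, "societal": 55.0},
    "Institution C": {"research": 75.0, "innovation": 80.0, "societal": 55.0},  # tie with B
    "Institution D": {"research": 40.0, "innovation": 35.0, "societal": 50.0},
}

def composite(scores: dict) -> float:
    """Weighted sum of the three dimension scores."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

raw = {name: composite(s) for name, s in institutions.items()}

# Min-max normalization onto the 0-100 scale on which the composite
# indicator is reported.
lo, hi = min(raw.values()), max(raw.values())
normalized = {name: 100.0 * (value - lo) / (hi - lo) for name, value in raw.items()}

# Standard competition ranking: identical scores share a rank, and the
# positions they occupy are skipped afterwards (e.g. 1, 2, 2, 4).
ranked = sorted(normalized.items(), key=lambda item: item[1], reverse=True)
ranks, previous_score = {}, None
for position, (name, score) in enumerate(ranked, start=1):
    if score == previous_score:
        ranks[name] = ranks[ranked[position - 2][0]]  # reuse the previous rank
    else:
        ranks[name] = position
    previous_score = score

for name in sorted(ranks, key=ranks.get):
    print(f"{ranks[name]:>2}  {name}: {normalized[name]:.1f}")
```

Running this prints ranks 1, 2, 2, and 4: the tie between the two identical scores leaves a gap at position 3, which is the behavior described above.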

Data Collection Methods

The SIR relies on various sources to collect data for its indicators:

  • Research Performance: Data is sourced from the Scopus database, which includes a wide range of academic publications and citation information. Indicators such as publication volume, citation impact, and scientific leadership are directly derived from this comprehensive dataset.
  • Innovation Outputs: Information on patents and industry collaborations is gathered from the PATSTAT database. This includes metrics such as the number of patents filed and cited, which measure the institution’s contribution to innovation.
  • Societal Impact: Data on web presence and social media mentions are collected using tools like Google, Semrush, and PlumX Metrics. This includes indicators such as website size, altmetrics, and authority score, which reflect the institution's online influence and societal engagement. Additionally, the Overton database is used to identify documents cited in policy documents, and Unpaywall is used to identify open access documents.
  • 2024 Edition Updates: For the 2024 edition, the societal factor includes new indicators reflecting contributions to the United Nations Sustainable Development Goals, the participation of women in research, and the use of research in public policy.
  • Standardization Process: The SCImago Research Group undertakes extensive manual and automated disambiguation of institution names to ensure publications and citations are attributed accurately. This involves resolving issues related to institution mergers, name changes, and multiple affiliations; a simplified illustration follows this list.
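
The following Python sketch gives a rough sense of what automated name disambiguation involves: raw affiliation strings are normalized and then mapped onto a canonical institution name through an alias table. The alias entries and helper functions here are hypothetical, and SCImago's actual process is far more extensive, combining large curated mappings with manual review.

```python
import re
import unicodedata

# Hypothetical alias table: affiliation variants that should resolve to a single
# canonical institution record. SCImago's real process relies on much larger
# curated mappings plus manual review.
ALIASES = {
    "mit": "Massachusetts Institute of Technology",
    "massachusetts inst of technology": "Massachusetts Institute of Technology",
    "univ oxford": "University of Oxford",
    "oxford university": "University of Oxford",
}

def normalize(affiliation: str) -> str:
    """Lowercase, strip accents and punctuation, and collapse whitespace."""
    text = unicodedata.normalize("NFKD", affiliation)
    text = "".join(ch for ch in text if not unicodedata.combining(ch))
    text = re.sub(r"[^\w\s]", " ", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def resolve(affiliation: str) -> str:
    """Return the canonical institution name, or the cleaned string if unmatched."""
    key = normalize(affiliation)
    return ALIASES.get(key, key)

print(resolve("Univ. Oxford"))     # -> University of Oxford
print(resolve("MIT"))              # -> Massachusetts Institute of Technology
print(resolve("Unknown College"))  # -> unknown college
```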

SIR Services and Offerings

Beyond its annual rankings, the SCImago Research Group offers several services related to research performance analysis:

  • Institutional Performance Reports: Customized reports providing detailed insights into an institution's research performance, strengths, and areas for improvement.
  • Benchmarking Services: Tools for comparing an institution’s performance against peers, helping identify opportunities for strategic development.
  • Consulting Services: Tailored advice to help institutions enhance their research impact, innovation capabilities, and societal engagement.
  • Workshops and Training: Educational programs designed to help institutions and researchers understand and utilize performance metrics effectively.

Criticisms of SIR Ranking

While the SCImago Institutions Rankings (SIR) are widely respected for their comprehensive approach to evaluating research institutions, several criticisms have been raised regarding their methodology and the potential biases in their assessment. These criticisms include the complexity of the ranking system, the robustness of innovation and societal impact indicators, and perceived regional biases.

  1. Complexity of Methodology

    The SIR methodology is intricate, involving a wide range of indicators that span research performance, innovation outputs, and societal impact. This complexity can be daunting for users who are not familiar with the detailed metrics and calculations involved. Critics argue that the sophisticated nature of the composite indicator, which normalizes various metrics on a scale of 0 to 100, can obscure the underlying data and make it difficult for stakeholders to fully understand and interpret the results.

  2. Robustness of Innovation and Societal Impact Indicators

    The measures for innovation and societal impact, which constitute 50% of the total score (30% for innovation and 20% for societal impact), have been scrutinized for their robustness and reliability:

    Innovation Indicators: The indicators for innovation outputs, such as the number of patents filed and the percentage of scientific publications cited in patents, rely heavily on the PATSTAT database. Critics point out that patents and patent citations are not always good measures of true innovation: patenting activity varies greatly between disciplines and regions, which can skew the results in favor of institutions with a strong emphasis on patentable research, often in engineering and technology fields.

    Societal Impact Indicators: Metrics such as altmetrics, web size, and authority score are intended to measure societal engagement and impact. However, these indicators can be volatile and susceptible to manipulation. For example, web size and authority score, derived from tools like Google and Semrush, can be influenced by an institution's SEO strategies rather than its genuine societal contributions. Additionally, altmetrics, which track mentions on social media and other online platforms, can be disproportionately affected by popular but not necessarily high-quality research.

  3. Perceived Regional Bias

    Like many global rankings, the SIR has been criticized for a perceived bias towards institutions in well-funded and research-intensive regions, particularly in North America and Europe. This bias can arise from several factors:

    Resource Disparities: Institutions in wealthier regions often have more resources to invest in research infrastructure, personnel, and publication output, which can lead to higher scores in the research performance indicators. This advantage can overshadow the efforts and achievements of institutions in developing countries that might be making significant contributions under more constrained circumstances.

    Publication and Citation Practices: The reliance on the Scopus database means that institutions that publish more in journals indexed by Scopus, and in English, are favored. This can disadvantage institutions from non-English speaking regions or those that publish in regional journals not indexed by Scopus.

    Innovation and Societal Metrics: Innovation metrics based on patent filings and citations may not accurately reflect the innovative capacities of institutions in regions where patenting is less common or where different forms of innovation, such as social innovations, are more prevalent. Similarly, societal impact metrics based on web presence and social media activity can be biased towards institutions with better online visibility and engagement strategies.

Conclusion

The SCImago Institutions Rankings, while comprehensive and widely used, are not without their flaws. The complexity of the methodology, potential issues with the robustness of innovation and societal impact indicators, and perceived regional biases are significant areas of criticism. Users of the SIR should be aware of these limitations and consider them when interpreting the rankings and making decisions based on them. It is also beneficial to use the SIR in conjunction with other ranking systems and qualitative assessments to obtain a more balanced view of an institution's performance and impact.