Frequently Asked Questions

Common questions about CivStat's methodology, data quality, and interpretation. For detailed technical documentation, see the Methodology page.

📄 Cite CivStat

Use this BibTeX entry for academic citations:

@misc{civstat2025,
  author = {Kim, Simon},
  title = {CivStat: A Multi-Dimensional Index for Measuring Civilizational Progress},
  year = {2025},
  url = {https://civstat.com/methodology},
  note = {Version 1.0. Data and methodology available at civstat.com}
}

Understanding CivScore

Is a higher CivScore always better?

Not necessarily. While a higher CivScore generally indicates better outcomes across the six dimensions, context is critical. For example, a country might score high on Welfare and Knowledge but low on Sustainability, meaning it has achieved prosperity at the expense of environmental health. Similarly, rapid economic growth might boost Welfare scores while temporarily depressing Freedom scores if authoritarian governance is used to accelerate development (the so-called "development dictatorship" model). The CivScore is a composite snapshot, not a moral judgment. We encourage users to examine individual dimension scores rather than relying solely on the aggregate.

Methodology

Why 78 indicators and not some other number?

The 78 indicators emerged from a rigorous selection process based on three criteria: (1) data availability – the indicator must have reliable data from at least one major international database; (2) temporal depth – the indicator should be measurable or estimable for historical periods, not just the modern era; (3) conceptual distinctiveness – each indicator must capture something meaningfully different from existing indicators. We started with over 200 candidate indicators from academic literature, UN databases, and composite index frameworks (HDI, SPI, GPI, EPI). After removing duplicates, indicators with insufficient data coverage, and those too highly correlated with existing measures (r > 0.95), 78 survived. This number is not fixed: our roadmap includes adding new indicators as data quality improves, particularly in areas like AI governance, digital rights, and mental health.

How does CivStat differ from HDI, SPI, or other composite indexes?

CivStat differs from existing composite indexes in four fundamental ways: (1) Temporal scope – HDI (since 1990), SPI (since 2014), and GPI (since 2008) cover only recent decades. CivStat spans 7,000 years, using historical estimation methods to extend measurement into the deep past. This enables analysis of civilizational arcs that are invisible in shorter time series. (2) Dimensional breadth – HDI uses 3 dimensions (health, education, income). SPI uses 3 pillars with 12 components. CivStat uses 6 dimensions with 78 indicators, covering areas like environmental sustainability and international cooperation that other indexes omit. (3) Conflict integration – most development indexes ignore or underweight conflict. CivStat treats conflict as a core dimension (−15% weight) following Pinker's per-capita violence methodology. (4) Population normalization – CivStat normalizes all indicators by population, enabling meaningful comparison between a world of 5 million people (5000 BCE) and 8 billion (2024). A war killing 100,000 in a world of 5 million is categorically different from the same number in a world of 8 billion.
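Point (4) reduces to converting absolute counts into per-capita rates. A minimal sketch (the function name and the per-100,000 base are illustrative, not CivStat's actual code):

```python
def per_capita_rate(events: int, population: int, per: int = 100_000) -> float:
    """Normalize an absolute count to a rate per `per` people."""
    return events * per / population

# A war killing 100,000 in a world of 5 million (c. 5000 BCE)
# versus the same toll in a world of 8 billion (2024).
ancient = per_capita_rate(100_000, 5_000_000)      # 2000.0 deaths per 100k
modern = per_capita_rate(100_000, 8_000_000_000)   # 1.25 deaths per 100k
print(ancient, modern)
```

The same absolute toll is over 1,500 times more severe per capita in the ancient world, which is why raw counts cannot be compared across eras.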

Why is Sustainability weighted at only 10%?

The 10% weight for Sustainability is one of CivStat's most debated design decisions, and we acknowledge the tension. The rationale: (1) Historical measurement – for 6,900 of the 7,000 years covered, sustainability was essentially constant (near 100). Weighting it heavily would make the pre-industrial CivScore almost entirely determined by environmental status, which was uniformly high and uninformative. (2) Avoiding double-counting – environmental degradation's effects appear in other dimensions: conflict over resources (Conflict), health impacts of pollution (Welfare), climate policy cooperation (Cooperation). (3) Forward-looking adjustment – we are actively researching a "time-variable weighting" system in which the Sustainability weight would increase for modern periods (20th century onward), when it becomes the binding constraint. This is planned for CivStat v2.0. We agree that in a forward-looking assessment, sustainability should arguably carry the highest weight. The current 10% reflects the challenge of building a single weighting scheme that works across 7,000 years.
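One possible shape for the time-variable weighting in point (3) is a simple ramp. Everything here is a placeholder assumption for illustration – the 25% modern weight and the 1900–2000 ramp window are ours, not the v2.0 design:

```python
def sustainability_weight(year: int,
                          base: float = 0.10,    # current CivStat weight
                          modern: float = 0.25,  # placeholder, NOT a v2.0 commitment
                          ramp_start: int = 1900,
                          ramp_end: int = 2000) -> float:
    """Hold the historical weight before ramp_start, then rise
    linearly until ramp_end, after which the modern weight applies."""
    if year <= ramp_start:
        return base
    if year >= ramp_end:
        return modern
    frac = (year - ramp_start) / (ramp_end - ramp_start)
    return base + (modern - base) * frac

print(sustainability_weight(-3000))  # pre-industrial: 0.10
print(sustainability_weight(1950))   # mid-ramp: ~0.175
print(sustainability_weight(2024))   # modern: 0.25
```

Any such scheme must also renormalize the other five dimension weights so they still sum to 100%, which is part of what makes the v2.0 design nontrivial.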

What is the confidence weighting system?

CivStat assigns a confidence score (0.0–1.0) to every data point, reflecting the reliability of the underlying measurement. The system works as follows:

Direct measurement (confidence 0.8–1.0): values from official statistics, censuses, or scientific instruments. Examples: post-1900 life expectancy data from WHO, CO₂ measurements from Mauna Loa since 1958.

Indirect evidence (confidence 0.5–0.8): values inferred from correlated evidence. Examples: medieval population estimates from tax records, pre-1900 literacy from school enrollment data.

Scholarly estimate (confidence 0.2–0.5): values based on academic consensus or model back-projections. Examples: ancient GDP per capita from the Maddison Project, prehistoric violence rates from archaeological evidence.

How confidence affects scoring: when aggregating indicators into composite scores, each value is weighted by its confidence score. A life expectancy estimate for 500 CE with confidence 0.3 contributes roughly a third as much to the composite as a 2020 value with confidence 0.95. This prevents low-confidence historical estimates from dominating the composite score while still including them in the visualization.
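Mechanically, confidence weighting is a weighted mean. A minimal sketch (illustrative, not CivStat's production code):

```python
def confidence_weighted_mean(values: list[float], confidences: list[float]) -> float:
    """Weighted mean in which each value contributes in
    proportion to its confidence score."""
    if len(values) != len(confidences):
        raise ValueError("values and confidences must align")
    return sum(v * c for v, c in zip(values, confidences)) / sum(confidences)

# A 500 CE life expectancy estimate (conf 0.3) pulls the composite far
# less than a well-measured 2020 value (conf 0.95).
print(confidence_weighted_mean([30.0, 73.0], [0.3, 0.95]))  # ~62.7, much closer to 73
```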

Data Quality

Can I trust data from developing countries?

Data quality varies significantly across countries, which is exactly why CivStat implements a three-tier confidence system. Each data point is tagged with a confidence grade: "Direct" (measured from official statistics, censuses, or scientific instruments), "Indirect" (inferred from correlated evidence like tax records or satellite imagery), or "Estimated" (based on scholarly consensus or model projections). For developing countries with weaker statistical infrastructure, more data points carry "Indirect" or "Estimated" confidence tags. We also apply confidence weighting in our aggregation: data points with lower confidence contribute less to composite scores. This doesn't eliminate the problem, but it transparently communicates uncertainty rather than presenting all data as equally reliable. In the Data Room, you can filter by confidence level to see which values are directly measured versus estimated.

How are historical periods before modern statistics estimated?

Estimating historical data requires a combination of methods depending on the period and indicator. CivStat uses four primary approaches: (1) Archaeological evidence – skeletal analysis for violence rates (Keeley, LeBlanc), burial sites for nutrition and disease, settlement patterns for population estimates. (2) Documentary evidence – tax records, census fragments, court documents, trade ledgers. Available from ~2000 BCE in Mesopotamia and Egypt, increasingly reliable from ~1500 CE. (3) Proxy measurement – ice core CO₂ for atmospheric composition, tree rings for climate, coin hoards for economic activity, manuscript counts for knowledge production. (4) Scholarly back-projection – researchers like Angus Maddison (GDP), Peter Turchin (social complexity), and the SESHAT project have developed rigorous methods to estimate historical values based on theoretical models validated against known data points. Between anchor points (years with scholarly consensus values), CivStat uses cosine-based interpolation rather than linear interpolation, reflecting the typically gradual nature of civilizational change. The interpolation formula is: v(t) = v₀ + (v₁ − v₀) × ½(1 − cos(π × (t − t₀) / (t₁ − t₀)))
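The interpolation formula transcribes directly into code (a faithful transcription of the formula above; the function name is ours):

```python
import math

def cosine_interpolate(t: float, t0: float, v0: float, t1: float, v1: float) -> float:
    """v(t) = v0 + (v1 - v0) * 1/2 * (1 - cos(pi * (t - t0) / (t1 - t0)))"""
    frac = (t - t0) / (t1 - t0)
    return v0 + (v1 - v0) * 0.5 * (1 - math.cos(math.pi * frac))

# Anchor years 1000 CE (value 40) and 2000 CE (value 80): endpoints are
# reproduced exactly and the midpoint equals the average, but the curve
# eases in and out instead of changing at a constant linear rate.
print(cosine_interpolate(1000, 1000, 40.0, 2000, 80.0))  # 40.0
print(cosine_interpolate(1500, 1000, 40.0, 2000, 80.0))  # ~60.0
print(cosine_interpolate(1250, 1000, 40.0, 2000, 80.0))  # ~45.9 (linear would give 50.0)
```

The slow start and finish near the anchors are what make this preferable to linear interpolation for gradual civilizational change: transitions accelerate in the middle rather than jumping at anchor years.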

Country-Specific Questions

Why does South Korea score relatively low on Freedom?

South Korea's Freedom score (approximately 72–76 out of 100) is strong by global standards but lower than top-performing Nordic countries (90+). Several factors contribute: (1) Press freedom – RSF ranks South Korea around 40th–50th globally, citing concerns about media ownership concentration and government pressure on public broadcasters. (2) The National Security Act – South Korea maintains broad national security laws (partly due to the ongoing North Korean threat) that can restrict political expression. (3) Digital rights – Freedom House classifies South Korea's internet as "Partly Free" due to content restrictions and surveillance capabilities. (4) Gender equality – the WEF ranks South Korea around 100th or lower on the Gender Gap Index, reflecting persistent gaps in economic participation and political representation. (5) Labor rights – restrictions on collective bargaining and worker protections. The geopolitical proximity to North Korea creates genuine security concerns that partially explain some restrictions, but CivStat measures outcomes, not justifications.

Data Updates

Is the 2024 data complete?

Partially. CivStat updates data on a rolling basis as international organizations release their annual reports. Most major datasets (World Bank WDI, UNDP HDR, Freedom House, V-Dem) publish data with a 1–2 year lag, meaning "2024 data" for some indicators actually reflects 2022 or 2023 measurements. The update cycle works as follows: (1) Annual batch update (January–March): incorporate all major dataset releases from the previous year; (2) Mid-year refresh (June–July): update fast-moving indicators like COβ‚‚ concentrations, conflict data, and economic statistics; (3) Event-driven updates: major geopolitical events (wars, coups, economic crises) trigger immediate reprocessing of affected indicators. The footer of each page shows the "Data last updated" date. In the Data Room, individual indicators show their last-updated timestamp. We aim for full-year coverage within 6 months of year-end, but some specialized indicators (e.g., SESHAT historical data, ice core measurements) update on multi-year cycles.

Features

How can I compare two countries?

Currently, CivStat focuses on global aggregate data, tracking the trajectory of human civilization as a whole. Country-level comparison is on our roadmap as a high-priority feature. When available, it will include: (1) side-by-side radar charts showing all six dimensions; (2) time-series comparison for any indicator; (3) relative ranking within each dimension; (4) a "development trajectory" view showing how a country's scores have changed over time. For now, the Data Room provides raw indicator data that can be filtered by region (though the current dataset focuses on global aggregates). Country-level granularity is expected to launch with CivStat v2.0. In the interim, you can access country-level data through our primary data sources (Our World in Data, V-Dem, World Bank) linked from the Methodology page.

How do I view historical trends?

The Dashboard provides time-series visualizations spanning 7,000 years of civilization history. You can: (1) Use the Year Slider to navigate through time; (2) View the Global Trajectory Chart showing the CivScore and all six dimension scores over time; (3) Click on historical events (e.g., Fall of Rome, Industrial Revolution) to see their impact on each dimension; (4) Toggle uncertainty bands to see data confidence levels for different periods. The chart uses logarithmic scaling for recent centuries (where changes accelerate) and linear scaling for earlier periods. Historical events are annotated with contextual explanations. For detailed time-series data on individual indicators, the Data Room provides tabular access with export capabilities.

Academic Use

Can I cite the CivStat methodology in academic papers?

Yes! CivStat is designed to be citable in academic work; please use the BibTeX entry provided at the top of this page. If you use specific data visualizations, please also cite the underlying data sources listed on each indicator's methodology page. CivStat aggregates and normalizes data from dozens of academic and institutional sources, so proper attribution should include both CivStat (for the framework and normalization) and the original data providers (for the raw data). We welcome academic scrutiny and peer review of our methodology. If you publish work using CivStat data, we'd love to hear about it: please contact simon@hashed.com.

Roadmap

Are you planning to add more indicators?

Yes, actively. Our indicator roadmap includes several planned additions. Short-term (2025–2026): AI Governance Index (measuring regulatory frameworks for AI), Space Activity Index (commercial and scientific space programs), Digital Infrastructure Quality (5G coverage, data center density), Mental Health Coverage (access to mental health services). Medium-term (2026–2027): Institutional Trust Index (public trust in government, media, science), Biodiversity Intactness Index (finer-grained ecological health), Ocean Health Index (marine ecosystem status), Circular Economy Score (waste reduction and material reuse). Long-term vision: we aim to reach ~120 indicators by 2028, maintaining the principle that each indicator must be conceptually distinct, have reliable data sources, and ideally be estimable for historical periods. Community suggestions are welcome: please submit indicator proposals via GitHub Issues or email.

General

Is CivStat open source?

Yes. CivStat is fully open source. The complete codebase, including data processing pipelines, normalization algorithms, visualization code, and the website itself, is available on GitHub at github.com/seojoonkim/civstat under the MIT License. You can: (1) inspect every calculation to verify how scores are computed; (2) propose changes to the methodology via pull requests; (3) fork the project to create alternative visualizations or weighting schemes; (4) download raw data for independent analysis. We believe transparency is essential for credibility, especially for a project that attempts to quantify something as complex as civilizational progress. If you find errors, data discrepancies, or methodological concerns, please open a GitHub issue; we actively review and respond to all feedback.

Still have questions?

We welcome feedback, corrections, and methodological discussions.